
Acquiring Data, Duplicating Data, and Recovering Deleted Files

Littlejohn Shinder, Michael Cross, in Scene of the Cybercrime (Second Edition), 2008

Canon RAW File Recovery Software

Canon RAW File Recovery Software (CRW Repair), a free tool developed by GetData Software Development (www.getdata.com), can be downloaded from www.crwrepair.com. Canon cameras usually store images in JPEG format, with RAW images stored in a file with a .crw extension. The CRW file is generally used for processing photos, allowing the photo's exposure, white balance, and other elements to be manipulated; it also lets users of the camera access the embedded JPEG quickly. Unfortunately, because it is a complex format, it can easily be corrupted by something as simple as a change in file size. CRW Repair examines the file size to ensure that it is correct and can access and extract the embedded JPEG image.
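
GetData's repair logic is proprietary, but the last step it advertises, extracting the embedded JPEG, can be approximated with generic file carving: a JPEG stream opens with the bytes FF D8 and ends with FF D9. The sketch below illustrates that idea only; a healthy CRW is a structured CIFF container, and a real tool would parse its directory rather than scan for markers.

```python
# jpeg_carve.py -- illustrative only: crude JPEG carving from a
# container file such as a corrupted .crw. Not GetData's algorithm.
import sys

SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker (plus next marker byte)
EOI = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpeg(data: bytes):
    """Return the first plausible JPEG span found in data, else None."""
    start = data.find(SOI)
    end = data.rfind(EOI)
    if start == -1 or end == -1 or end <= start:
        return None
    return data[start:end + 2]  # include the two EOI bytes

if __name__ == "__main__":
    raw = open(sys.argv[1], "rb").read()
    jpeg = carve_jpeg(raw)
    if jpeg is None:
        print("No JPEG markers found")
    else:
        out = sys.argv[1] + ".jpg"
        open(out, "wb").write(jpeg)
        print(f"Wrote {len(jpeg)} bytes to {out}")
```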


URL: https://www.sciencedirect.com/science/article/pii/B9781597492768000078

Computer Forensic Software and Hardware

Littlejohn Shinder, Michael Cross, in Scene of the Cybercrime (Second Edition), 2008

R-Studio

R-Studio is a family of powerful, cost-effective data recovery tools. It recovers files from FAT12, FAT16, FAT32, NTFS, NTFS 5 (created or updated by Windows 2000/XP/2003/Vista), HFS/HFS+ (Macintosh), little- and big-endian variants of UFS1/UFS2 (FreeBSD, OpenBSD, NetBSD, and Solaris), Ext2FS (Linux), and Ext3FS (Linux) partitions. It works on local and network disks, even if the partitions have been formatted, damaged, or deleted. The suite includes a variety of tools and is available from www.data-recovery-software.net.


URL: https://www.sciencedirect.com/science/article/pii/B9781597492768000066

Data Hiding Forensics

Nihad Ahmad Hassan, Rami Hijazi, in Data Hiding Techniques in Windows OS, 2017

Analyzing the Digital Evidence for Deleted Files and Other Artifacts

The acquired image will contain the existing files, in addition to deleted files and other file fragments. As we said before, when you delete a file on a disk drive or USB stick, the file's contents are not erased immediately; only its file system entry is marked as deleted. In this exercise we will recover deleted data from the image we already have.
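
To make this concrete: on a FAT-formatted USB stick, for instance, deleting a file merely overwrites the first byte of its 32-byte directory entry with the marker 0xE5 and frees its clusters; the data itself stays on disk until those clusters are reused. Below is a simplified sketch of reading such entries from raw directory bytes carved out of an image (long-file-name entries are skipped rather than parsed):

```python
# List "deleted" entries in raw FAT directory bytes -- a simplified
# illustration of why deleted files remain recoverable.
DELETED = 0xE5     # first byte of a directory entry marked as deleted
ENTRY_SIZE = 32    # FAT directory entries are 32 bytes
LFN_ATTR = 0x0F    # attribute value of long-file-name entries

def list_deleted_entries(directory_bytes: bytes):
    for off in range(0, len(directory_bytes) - ENTRY_SIZE + 1, ENTRY_SIZE):
        entry = directory_bytes[off:off + ENTRY_SIZE]
        if entry[0] == DELETED and entry[11] != LFN_ATTR:
            name = entry[1:8].decode("ascii", "replace").strip()  # 1st letter lost
            ext = entry[8:11].decode("ascii", "replace").strip()
            size = int.from_bytes(entry[28:32], "little")
            print(f"?{name}.{ext}\t{size} bytes")
```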

1.

Open ProDiscover® Basic, go to File ≫ New Project, enter the project number and name in the dialog (along with any comments you wish to add), and click OK.

2.

From the tree view of the main window, click Add to expand this button, then click Image File to select the image you acquired in the previous section (see Fig. 6.36).


Figure 6.36. The tree view in ProDiscover® Basic.

3.

After opening your acquired image in ProDiscover®, go to Content View and expand it if necessary, then click to expand Images. Finally expand your image file by clicking the + sign on the left side of your selected image (see Fig. 6.37).


Figure 6.37. Expand Content View inside the ProDiscover® Basic main window to view image files.

When you click your acquired image, the ProDiscover® work area is populated with the image file's contents. Below the work area is the data area, where the contents of each file selected in the work area appear (see Fig. 6.38).


Figure 6.38. Selecting a file and viewing its contents in the data area.

You can see in Fig. 6.38 that the work area contains many deleted files that have not yet been overwritten and can easily be recovered. To recover a deleted file, right-click it and select View to open it with the application associated with that file type, or select Copy to copy the file to another location for later use (see Fig. 6.39).


Figure 6.39. View or copy a deleted file using ProDiscover® Basic.

If your acquired image contains a large volume of data, investigating it manually may take more time than you have. For this reason, ProDiscover® Basic offers a search facility that accepts both string and hexadecimal values as search keywords.

To test the search facility, go to the tree view and click Search; the search options dialog box appears. Select the Content Search tab (see Fig. 6.40). From here you can configure your search options: choose the keyword type (hex or string) and, optionally, search inside metadata (such as MS Office file metadata). You can also filter your findings by date (created, modified, or accessed). The search dialog is rich and contains other tabs as well: the Cluster tab lets you search for data at the cluster level; the Registry Search tab searches inside the Windows® registry (this requires an image of the Windows® registry); the Internet History Search tab searches previously visited websites; and the Event Log Search tab investigates Windows® log files.


Figure 6.40. Searching for a specific keyword in the search dialog using ProDiscover® Basic.

For example, to search for a name in all files, including their metadata, type the information shown in Fig. 6.40 into the search options dialog.

ProDiscover® Basic will conduct a search inside a selected image and show all files that contain the specified keyword (see Fig. 6.41).


Figure 6.41. Search result pane showing the keyword search result.

You can view or save any file from the search results by right-clicking the file and then selecting View or Copy File, as we did before. ProDiscover® allows you to conduct more than one search and opens each search result in a separate panel, giving its users more flexibility, as shown in Fig. 6.41.
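
Under the hood, such a keyword search amounts to scanning the image bytes for a pattern, whether the keyword was typed as a string or as hex. A minimal sketch of that core loop follows (forensic tools add text encodings, indexing, and metadata parsing on top):

```python
# A minimal sketch of the core of a raw keyword search over a disk
# image: scan the bytes for a pattern given either as text or as hex.

def search_image(image_path: str, keyword: str, is_hex: bool = False,
                 chunk_size: int = 1 << 20):
    pattern = bytes.fromhex(keyword) if is_hex else keyword.encode()
    hits = []
    base = 0    # absolute offset of buf[0] within the image
    tail = b""
    with open(image_path, "rb") as img:
        while chunk := img.read(chunk_size):
            buf = tail + chunk
            pos = buf.find(pattern)
            while pos != -1:
                hits.append(base + pos)
                pos = buf.find(pattern, pos + 1)
            keep = len(pattern) - 1          # carry an overlap so matches
            tail = buf[-keep:] if keep else b""  # spanning chunks are found
            base += len(buf) - len(tail)
    return hits

# e.g., search_image("usb.dd", "invoice") for a string keyword,
# or search_image("usb.dd", "ffd8ffe0", is_hex=True) for hex.
```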

If you prefer not to use this method of recovering deleted data (acquiring an image and then investigating it for deleted files with a computer forensic tool like ProDiscover® Basic), you can install dedicated data recovery software instead. There are many free programs that perform this function; keep the following important points in mind when using such tools:

1.

When using a data recovery tool, make sure you install it on a PC other than the one you want to recover data from. It is better to unplug the hard drive of the PC you want to retrieve deleted files from and attach it to another PC that already has the data recovery software installed. Recovering deleted data from a machine onto that same machine may render your lost data permanently unrecoverable, because installing the tool or writing the recovered files can overwrite the very sectors that still hold the deleted data.

2.

Not all data recovery software offers as deep a search facility as many computer forensic tools do. For example, ProDiscover® lets you search inside file metadata for a specific keyword, which allows you to filter data very quickly and save time.

3.

You should select data recovery software according to the file system used on the drive you are recovering data from. For example, most hard disks in Windows® PCs use the NTFS file system, whereas USB flash drives usually use some variant of FAT (FAT16, FAT32, or exFAT). (A boot-sector signature sketch for identifying the file system follows this list.)

4.

To properly record your findings in a legal digital forensic investigation, you must first acquire a disk drive image using a reliable computer forensic tool such as ProDiscover® Basic, FTK® Imager, or a similar program, and then perform your data recovery on that image. This is one part of making your evidence stand up in a court of law.
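
On the third point, the file system of a raw volume can be identified from its boot-sector signatures before choosing a tool. Here is a rough sketch, using offsets from the published NTFS/exFAT/FAT boot sector layouts (a production tool validates far more than this):

```python
# Identify a volume's file system from boot-sector signatures,
# to help pick appropriate recovery software.

def identify_filesystem(volume_path: str) -> str:
    with open(volume_path, "rb") as vol:
        boot = vol.read(512)          # the volume boot record
    oem = boot[3:11]                  # OEM ID field
    if oem.startswith(b"NTFS"):
        return "NTFS"
    if oem.startswith(b"EXFAT"):
        return "exFAT"
    if boot[82:87] == b"FAT32":       # FAT32 file system type string
        return "FAT32"
    if boot[54:59] in (b"FAT16", b"FAT12"):  # FAT12/16 type string
        return boot[54:59].decode()
    return "unknown"

# e.g., identify_filesystem("usb.dd") on an image of a flash drive
```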

Some data recovery programs for Windows® are:

1.

TestDisk: http://www.cgsecurity.org/wiki/TestDisk

2.

PhotoRec: http://www.cgsecurity.org/wiki/PhotoRec

3.

Recuva: https://www.piriform.com/recuva

4.

EaseUS Data Recovery Wizard: http://www.easeus.com/datarecoverywizard/free-data-recovery-software.htm

5.

A list of data recovery tools for Windows® OS is available on the SourceForge website: http://sourceforge.net/directory/system-administration/storage/recovery/os:windows/; another list is available on the Forensics Wiki website: http://www.forensicswiki.org/wiki/Tools:Data_Recovery


URL: https://www.sciencedirect.com/science/article/pii/B9780128044490000063

Ensuring High Availability for Your Enterprise Web Applications

Shailesh Kumar Shivakumar, in Architecting High Performing, Scalable and Available Enterprise Web Applications, 2015

2.8.3 Recoverability

This attribute depicts how well the system recovers from an error scenario. A few aspects of this are covered under “fault tolerance” and “transparent load balancing,” discussed earlier. While fault tolerance is achieved at the application level by graceful degradation of functionality, transparent load balancing is achieved at the hardware level using optimal load balancing algorithms. Similarly, the recoverability of the system also indicates how well it provides a “transparent failover” to standby nodes when the primary nodes fail.

In addition to these two aspects, another crucial attribute of a highly available application is its “time to recover,” that is, how quickly the application can return to its original state and resume normal operation after suffering a critical failure.

Architecting a truly fail-safe system is a creative challenge because it needs to address a multitude of systems and layers. The novel approach mentioned below addresses the major problems related to data and configuration, which comprise about 80% of recovery scenarios.

Various modules of the automated data and software recovery framework are shown in Figure 2.10.


Figure 2.10. Automated data and software recovery framework.

The automated data and software recovery framework primarily consists of a recovery module, which continuously performs two tasks for systems under monitoring (SuM): monitoring and backup. In the above diagram, SuM includes a database, file system, and server configuration data. However, this framework can be extended to other systems of record as well, such as a CMS, an ERP system, and so forth.

During the monitoring phase, the monitoring agent continuously pings the SuM with heartbeat messages. The “ping message” is specific to a given system: for the database it would be a simple SQL query, for a services system a service call, and for a file system a checksum of the files. If the ping message returns success, the agent takes a snapshot of the source data into the backup module. The type of backup again depends on the type of source system: for the database it is the data of committed transactions for a specific time period; for the file system it is the set of changed files. These are copied to the backup module to create a “rollback point.”

If the monitoring agent discovers that any of the systems or services is down, or that one is not responding within a configured threshold time period, it signals the control switching module. The control switching module then transparently switches calls and control to the backup module instead of the source systems. Because this happens transparently to the downstream systems, transactions proceed uninterrupted. The control switching module also notifies the system administrator about the failure. Once the source system comes back, the data from the rollback point is synced back to the source system, along with control.
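
The agent's loop can be sketched schematically as follows. The object methods used here (ping, snapshot, to_backup, admin) are illustrative placeholders for the framework's monitoring, backup, control-switching, and notification modules, not a real API:

```python
import time

HEARTBEAT_INTERVAL = 30  # seconds between heartbeat messages
FAILURE_THRESHOLD = 3    # consecutive misses before control switching

def monitor(system, backup, switch, notifier):
    misses = 0
    while True:
        if system.ping():            # SQL query / service call / checksum,
            misses = 0               # depending on the system under monitoring
            backup.snapshot(system)  # refresh the rollback point
        else:
            misses += 1
            if misses >= FAILURE_THRESHOLD:
                switch.to_backup(system)  # route calls to the backup module
                notifier.admin(f"{system}: failed, control switched to backup")
                return
        time.sleep(HEARTBEAT_INTERVAL)
```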


URL: https://www.sciencedirect.com/science/article/pii/B9780128022580000020

Collecting and Preserving Digital Evidence

Littlejohn Shinder, Michael Cross, in Scene of the Cybercrime (Second Edition), 2008

Computer Forensic Equipment and Software

A number of companies including Guidance Software (www.guidancesoftware.com) and DIBS (www.dibsusa.com) market special equipment to aid in forensic examinations. The following types of equipment can be useful to investigators and forensic technicians:

Imaging equipment These devices allow you to rapidly make bitstream copies of hard disks onto another hard disk, an optical cartridge, or a tape. Portable units that fit into a suitcase are available and can be easily transported to the crime scene to make disk copies on-site before the computer is shut down. The target media include write-protection features to ensure that data cannot be tampered with after the copies are made.

Forensic workstations These are complete computer workstations set up for easy reconstruction and analysis of copied drives, usually with removable drive racks that allow booting of the “working copies” of suspect disks. Analysis software is installed to assist in searching for particular types of data using artificial intelligence techniques or fuzzy logic to conduct searches when the investigator isn't sure of the text strings or file types he or she is looking for. Data recovery software is installed to locate data from “deleted” or “erased” files. Mobile workstations set up on portable computers are also available. Examples include the DIBS forensic workstations and F.R.E.D., the Forensic Recovery of Evidence Device, which is made by Digital Intelligence (www.digitalintel.com/fred.htm).

Forensic software Packages provided by companies such as Guidance Software, NTI, and DIBS include imaging software, “undelete” programs, comprehensive file and text string search programs, programs that can verify the accuracy of bitstream copies, programs that can remove binary characters from data to ease analysis of the data, programs that quickly document lists of files and directories, programs that can capture the data in unallocated space or file slack space, programs that can rebuild cache, uncompression tools, system-checking utilities, steganography detection software, password recovery programs, and much more. For a list of some of the best computer forensic software programs, see the Timberline Technologies Web site at www.timberlinetechnologies.com/products/forensics.html. Also, NTI provides several free forensic tools at www.forensics-intl.com/download.html.
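
One capability from this list, verifying the accuracy of a bitstream copy, reduces at its core to a hash comparison between the source medium and the image. A minimal sketch follows; actual forensic imagers compute the hash during acquisition and log it alongside the evidence:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file or raw device in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(source_device: str, image_file: str) -> bool:
    # e.g., verify_copy("/dev/sdb", "evidence.dd") on a Linux workstation
    return sha256_of(source_device) == sha256_of(image_file)
```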

On the Scene

Building a Forensic Workstation

You can build your own forensic workstation using either a portable or a desktop computer instead of buying the prepackaged hardware/software combination. The system should be powerful enough to run forensic application software, and to avoid having to upgrade the equipment too soon, it should have the most powerful processor and as much RAM as possible (or at least as much as you can afford). You will also need a significant amount of hard disk space to store the evidence files that are created. It is not uncommon for computer forensic labs to have terabytes of hard disk space for evidence files, which also need to be backed up regularly in case of a hard disk failure or other problems.

The workstation should run an operating system compatible with your forensic application software. You might find it useful to set up a dual-boot configuration so that you can boot into either Windows or Linux, or you can run VMware (www.vmware.com) virtual machines, allowing you, for example, to view a New Technology File System (NTFS)-formatted disk from within the Linux operating system using a Windows virtual machine.


URL: https://www.sciencedirect.com/science/article/pii/B9781597492768000157

The Difference between Computer Experts and Digital Forensics Experts

Larry E. Daniel, Lars E. Daniel, in Digital Forensics for Legal Professionals, 2012

8.2 The digital forensics expert

Digital forensics examiners may or may not come from a background of working with computers. Many law enforcement examiners start out as police officers who do not have computer backgrounds but who are selected for various reasons and then attend training in computer forensics. Others begin their careers in computer support and subsequently get training in forensics.

Having in-depth knowledge of computers and software in general is not a prerequisite for a digital or computer forensics examiner. It can certainly be a plus, but it is not a requirement. A computer forensics examiner is trained to work with specialized tools to perform recovery of data and to analyze that data in a forensic manner. What that means is that a computer forensics examiner is focused on the examination of recovered data from an evidentiary standpoint. What does the evidence mean in light of the case at hand? The computer forensics examiner must be able to determine facts about the data, not just recover the data.

What matters in the training and development of a computer forensics expert is the focus on the handling of evidence, the investigation of the alleged acts, working within the law, and the ability to present findings in a legal matter.

To give an example, suppose that your client is accused of deleting data from a hard drive after a preservation hold has been put in place. Furthermore, add in the fact that when the computer is examined, a file-wiping software program is discovered to have been on the computer.

Examiner 1, who is a computer support person with no forensics training, examines the hard drive, runs a file recovery software application against the hard drive, and recovers hundreds of deleted files. Then in his report, he states that because the computer had file-wiping software installed on the computer, he was not able to open all of the recovered files. During his examination, he also operates the client’s computer.

In his report Examiner 1 states the opinion that the computer owner had last run the file-wiping software two days after the court hearing. He also states that the file-wiping software permanently deletes files. Lastly, he states that because the file-wiping software was run two days after the court hearing, he could not open some of the files he recovered.

Examiner 2, who is a trained forensics examiner, also examines the hard drive from the client’s computer. However, Examiner 2 first removes the hard drive and makes a forensic copy without ever turning the computer on. He then examines the hard drive forensic copy and also recovers hundreds of deleted files. He notes that the only evidence of a file-wiping program is the empty directory where the file-wiping software was installed. Inside that folder, he notes that the only file remaining is a system file with a date that is two days after the court hearing on the preservation order.

Next he locates and downloads a copy of the same version of the file-wiping program onto a clean test computer and then runs the software and subsequently uninstalls the software to determine how it works and what it does when it is uninstalled. He also determines that while the computer was in the custody of Examiner 1, over ten thousand files were accessed on the client’s computer.

Examiner 2 determines that the file-wiping software is designed to permanently delete files by overwriting them with zeroes. By examining the raw data on the hard drive forensic copy, he notes that sets of zeroes are not found on the hard drive—that would be evidence of overwritten files.
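
The check described here, scanning the raw data for the overwrite pattern, can be sketched in a few lines. This is an illustrative simplification, not Examiner 2's actual procedure, and the result needs careful interpretation because never-used space on a drive also reads as zeroes:

```python
# Count fully zeroed 512-byte sectors in a forensic image; a large
# count in previously used regions would be consistent with a
# zero-overwrite wipe having been run.

SECTOR = 512
ZERO_SECTOR = b"\x00" * SECTOR

def count_zeroed_sectors(image_path: str):
    zeroed = total = 0
    with open(image_path, "rb") as img:
        while sector := img.read(SECTOR):
            total += 1
            if sector == ZERO_SECTOR:
                zeroed += 1
    return zeroed, total

# e.g., zeroed, total = count_zeroed_sectors("forensic_copy.dd")
```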

Examiner 2 concludes that the file-wiping program was removed from the computer two days after the court hearing. He also concludes that the file-wiping software, while present, did not prevent the recovery of thousands of files, indicating that the software was never run against the client’s drive to remove files of interest. He also concludes that if the file-wiping program were run against the drive, the evidence of such would be the presence of a known overwrite character repeated in sections in the raw data on the hard drive, and this was not present.

Examiner 2 notes that the conclusions of Examiner 1 are directly contradictory to one another.

Figure 8.2 illustrates the focus of digital forensics experts on the areas of technical knowledge, investigative techniques, and the legal system.


Figure 8.2. Knowledge areas of digital forensics.


URL: https://www.sciencedirect.com/science/article/pii/B9781597496438000080

Enterprise Architecture Case Study

Shailesh Kumar Shivakumar, in Architecting High Performing, Scalable and Available Enterprise Web Applications, 2015

9.6 Adding high availability features for ElectronicsDeals online

The specified details for ElectronicsDeals are as follows:

The online application should be available 99.999% of the time across all the geographies where ElectronicsDeals Inc. operates.

The services exposed by ElectronicsDeals Inc. should be available 99.99% of the time.

We have already implemented many of the availability-related patterns and best practices, as part of implementing scalability. Let us quickly look at some of the features we have already incorporated and at how they can contribute to the high availability goal (Table 9.6).

Table 9.6. ElectronicsDeals high availability features

Availability feature | ElectronicsDeals implementation | Advantages
Failover

Clustered configuration: The web server, application server, content server, and database server are provided with two clusters of four nodes each

HA nodes: Nodes within each cluster help in intracluster failover

HA cluster: If the entire primary cluster fails, the requests will be handled by the standby cluster

HA site: If both the primary and standby cluster fail, the disaster recovery environment will become active and take over the operations.

Session failover: Full cluster-wide session replication is enabled across all nodes of the cluster. This enables seamless session failover with minimal disruption to the user experience, supporting high availability

Message failover: ESB infrastructure enables built-in message failover to ensure that product post functionality is always available. Reliable message delivery configuration is also leveraged

Ensures high availability of all critical servers
DR strategy ensures business continuity during natural disasters
Handles failure of any node within each cluster
Handles failure of primary cluster
Handles failure of the entire primary live site
Stepwise functionality degradation

If any source system or service fails, the availability of critical functionality is ensured using cached values. For instance, if the products database is down, the product search functionality gets the matching products from cache

Some functionalities would still operate in spite of failure of back-end services. For instance, if the product pricing service is down, then the product details page would still operate and show the details of the product without pricing information. Customers can request pricing details, and they will be notified later when pricing services come up

Availability of critical application functions is ensured even when the back-end systems are down
Asynchronous & Services based enterprise integration

Internal product-pricing system is integrated using RESTful web services

Product-posting functionality is implemented using message queues in a distributed ESB, which acts as message-oriented middleware (MoM)

All web components that fetch data from the back-end and upstream systems do so asynchronously. Functionality such as product detail, product search, and product posting all use nonblocking, asynchronous calls and are lightweight in nature

Asynchronous invocation of lightweight services ensures high availability and nonblocked page loads
Messaging infrastructure guarantees high availability of services and QoSs
Stateless & lightweight application components

The product-pricing service is made stateless using RESTful web services

All web components integrated with external interfaces are developed using AJAX technology and are lightweight in nature

Stateless nature of services ensures high availability, because the services can easily be distributed among multiple nodes and also failover can be done efficiently
Virtualization

Storage virtualization is utilized for storing shared global files

Virtualized storage offers high availability, leveraging multiple file servers
Layer-wise caching

Caching is implemented at each level of the ElectronicsDeals application. More details are explained in the performance section.

Caching minimizes calls to origin systems and services and hence ensures maximum availability. If any system is down or has availability issues, it will not impact the entire availability chain, thanks to caching

We will also implement the features of the 5R model proposed in the “Availability” chapter.

Reliability: ElectronicsDeals online is made highly reliable by employing a wide range of fault-tolerance techniques at the hardware and software layers. Multi-node clustering, backup clusters, and the DR site increase fault tolerance at the hardware level. Stepwise degradation of functionality and error handling increase fault tolerance at the software level.

Replicability: Application data and configuration are replicated across the primary cluster and standby cluster in a live environment and between the live site and DR site.

Recoverability: Adoption of a robust fault-tolerance mechanism and of an automated data and software recovery framework ensures that the system can recover from error scenarios. Additionally, the following recovery features are implemented:

Checkpoint and transaction rollback for critical transactions such as order processing ensures that the system recovers from a failed transaction (a minimal rollback sketch appears after this list).

Robust exception and error handling features also help recoverability.

Reporting and monitoring: A robust monitoring and notification infrastructure is set up to monitor internal and external applications.

Redundancy: Employing a multi-node standby cluster to handle failures of the primary cluster follows the N+M redundancy model.
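
As a minimal illustration of the checkpoint/rollback idea mentioned above, the following sketch wraps an order-processing transaction so that a failure leaves the database in its pre-transaction state. It uses sqlite3 for brevity, and the table and column names are invented for the example:

```python
import sqlite3

def place_order(db_path: str, customer_id: int, item_id: int) -> bool:
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction: commits on success, rolls back
                    # automatically if any statement raises
            conn.execute(
                "INSERT INTO orders (customer_id, item_id) VALUES (?, ?)",
                (customer_id, item_id))
            conn.execute(
                "UPDATE inventory SET stock = stock - 1 "
                "WHERE item_id = ? AND stock > 0", (item_id,))
            # a real system would also verify the UPDATE affected a row
        return True
    except sqlite3.Error:
        return False  # the database is back in its pre-transaction state
    finally:
        conn.close()
```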

The following high availability features are implemented for ElectronicsDeals’ online application:

Very high availability for the global gateway page and product home page: ElectronicsDeals features new product launches, sales promotions, and marketing campaigns, and it connects with its customers through a collaboration platform on a global gateway page. It is therefore extremely important for this page to always be available. Similarly, the product home page is also critical to ElectronicsDeals’ business. To achieve maximum availability for these two pages, we need to minimize the points of failure in the availability chain, and we can leverage the static nature of both pages to our advantage. We will achieve maximum and continuous availability with two techniques:

Host these two pages fully on a CDN platform along with static assets. Since a CDN network has geographically distributed servers, it offers high availability, redundancy, and performance.

Additionally, cache these two pages using full-page caching in the web server. Thus, when the CDN needs to refresh the pages, they will be served from the web server cache.

To ensure proper content refresh, whenever the content of these two pages is updated, invalidate the web server cache and then the CDN cache, in that order. The publishing workflow of the CMS can invoke the cache invalidation procedures of the web server and the CDN.
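
A schematic sketch of that publish-time hook follows. The PURGE request and the CDN stub are placeholders for vendor-specific purge APIs, not a real integration:

```python
import urllib.request

def invalidate_webserver_cache(url: str) -> None:
    # Many full-page caches (e.g., Varnish) accept an HTTP PURGE
    # request for the cached URL; adjust to your setup.
    urllib.request.urlopen(urllib.request.Request(url, method="PURGE"))

def invalidate_cdn_cache(url: str) -> None:
    ...  # call your CDN vendor's purge API here

def on_content_published(page_urls):
    # Origin cache first, then CDN, so that when the CDN re-fetches
    # the purged page it receives the already-updated copy.
    for url in page_urls:
        invalidate_webserver_cache(url)
        invalidate_cdn_cache(url)
```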

High availability for services: The ElectronicsDeals platform mainly exposes and consumes two functionalities, one via messaging and one via a web service. The product-posting functionality, used by third-party merchants and product vendors, employs message queues: a merchant posts the details of the products it wants to list as a message, which is sent to a queue processed by the ElectronicsDeals application. ElectronicsDeals also uses a REST-based web service to fetch the price details of a given product from the internal product-pricing ERP system.

In order to ensure high availability of these two components, we will use a distributed ESB server. We have already seen the scalability, flexibility, QoS, interoperability, and availability offered by ESB.

Clustering the ESB would further increase its robustness, because the cluster provides redundancy and failover support.

Monitoring and notification: Various monitoring infrastructure components will be used for continuously monitoring the ElectronicsDeals enterprise application:

Web analytics-based customer behavior monitoring: Key pages and transactions, including the product landing pages, shopping cart page, order checkout page, and product search pages, will be tagged with web analytics scripts to get real-time insights into customer behavior, problem patterns, and browsing patterns. The key performance indicators (KPIs) that will be tracked include:

Average page load time: Average time taken for complete page load

Availability: Total availability of each page

Geo-specific page load time: Page load time for each geography

Conversion ratio: Number of people who placed orders / Total number of visitors

Bounce rate: Number of people who bounced to a different site

Returning visitors: Total number of repeated customers

New/return visitor conversion ratio: Number of people who placed orders / Total number of new/return visitors

Items per order rate: Average number of items per order

Average order placement transaction completion time

Average product search completion time.

By constantly monitoring these KPIs in web analytics reports, the business analysts and marketing team can fine-tune their online strategy. Some examples are given below:

The conversion ratio is the critical KPI that has a direct impact on business revenue. If the conversion ratio is lower than expected, we can look at the various factors that affect it, such as page performance, bounce rate, and many others.

If the bounce rate is too high for any given page, we can do a drill-down analysis of page component-wise performance, user experience elements, and other aspects.

If the average order placement transaction time exceeds 2 s, drill down into the performance of the order page elements, the related business objects, and the external payment gateway service to take corrective actions.

System health check monitoring: This internal monitoring infrastructure regularly monitors the CPU utilization, memory usage, and disk utilization of all nodes of the web servers, application servers, database servers, and content management server. The following thresholds will be configured:

80% CPU utilization

70% memory utilization

70% disk utilization.

As soon as the threshold value is exceeded, the internal monitoring will automatically notify the system administrators.
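
A minimal health-check sketch wired to these thresholds is shown below; the psutil library supplies the utilization figures, and notify() is a stand-in for the real alerting channel:

```python
import psutil

THRESHOLDS = {"cpu": 80.0, "memory": 70.0, "disk": 70.0}  # percent

def notify(message: str) -> None:
    print("ALERT:", message)  # replace with email/pager integration

def check_node() -> None:
    readings = {
        "cpu": psutil.cpu_percent(interval=1),
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }
    for metric, value in readings.items():
        if value > THRESHOLDS[metric]:
            notify(f"{metric} utilization {value:.0f}% exceeds "
                   f"{THRESHOLDS[metric]:.0f}% threshold")
```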

In addition, this health check monitoring infrastructure will also constantly check the “up-and-running” status of all systems and applications involved in the ElectronicsDeals enterprise application, including the product-pricing service, the product database, and the clustered ESB.

External application performance monitoring: Active real-time application performance monitoring will be done through automated bots that are configured to check the ElectronicsDeals application from these geographies—North America, Europe, and Asia-Pacific—to gather the end-user experience and perceived response times. The following thresholds are configured:

Product page response time is more than 2 s consistently for more than an hour in the North America geography and 5 s consistently for more than an hour in other geographies.

Product search results take more than 3 s consistently for more than an hour in all geographies.

Global gateway page and product home page availability drops below 99.999%.

Product-posting page availability drops below 99.999%.

Any of the threshold violations mentioned above would automatically trigger notification to the application maintenance team.

Disaster recovery environment: As we have seen, establishing a robust disaster recovery management process is a critical part of business continuity planning (BCP). For an availability requirement of five nines (99.999%), we need to ensure that the DR environment is set up to fulfill those objectives.

To begin with, we need to establish the values for the recovery time objective (RTO) and recovery point objective (RPO). For the ElectronicsDeals enterprise application we set RTO as 10 minutes and RPO as 5 hours and established the processes accordingly.

DR site setup: The DR site should be set up with exactly the same configuration as the primary live environment. It handles requests when the primary site is down or if it is facing availability and performance issues. It is also used to handle requests during peak load.

Fulfilling RTO: In order to achieve the RTO objective of less than 10 min, we need to configure the global master load balancer to constantly monitor the status and response time of all clusters of the primary live site. The check has to run every 2 min. If the clusters do not respond for two consecutive checks, the system routes requests to the disaster recovery site.
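
The arithmetic behind this check, sketched below: with a 2-minute interval and two consecutive misses required, a failure is detected within at most about 4 minutes, leaving margin inside the 10-minute RTO for the switchover itself. The two callables are placeholders for the load balancer's built-in health check and traffic-switch actions:

```python
import time

CHECK_INTERVAL = 120       # seconds: check every 2 minutes
MAX_CONSECUTIVE_MISSES = 2

def failover_monitor(check_primary, activate_dr_site):
    misses = 0
    while True:
        misses = 0 if check_primary() else misses + 1
        if misses >= MAX_CONSECUTIVE_MISSES:
            activate_dr_site()  # route requests to the DR environment
            return
        time.sleep(CHECK_INTERVAL)
```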

Fulfilling RPO: In order to achieve the RPO of 5 h, the replication process between the primary live site and the DR site should run once every 4 h. The replication process should copy the data and configuration for all the systems, including the Lightweight Directory Access Protocol (LDAP) server, web server, application server, database server, product-pricing ERP system, and CMS, as shown in the diagram below (Figure 9.3).


Figure 9.3. ElectronicsDeals disaster recovery jobs.

Testing external interfaces: Availability testing has to be carried out for end-to-end components, including the third-party components. In our case, we are using a chat widget and a payment gateway page. We need to constantly monitor the availability and performance of these external interfaces.

Testing the DR site: Before deploying the DR site, we need to test the configuration and replication of the DR environment to ensure that specified RTO and RPO are achievable.

Data synchronization and replication: Regular synchronization jobs need to be set up to synchronize the data between the primary site and DR site. Synchronization happens for web server configuration, application server data, database data, file storage data, content in CMSs, and user data in the LDAP system.

A complete DR setup with synchronization jobs is shown in Figure 9.3.


URL: https://www.sciencedirect.com/science/article/pii/B9780128022580000093