Rethinking backups to combat ransomware
- By Pritesh Parekh
- Sep 07, 2021
One of the reasons ransomware attacks succeed is that victims cannot afford to be offline. Even if organizations have backups, the associated data loss and the disruption caused by long restore times are often more costly than simply paying off the criminals and being done with it.
Sometimes it’s even worse: Cybercriminals have been known not only to encrypt an organization’s data but also to target the backups themselves. In fact, on average, only 69% of health care data could be restored even after the organizations paid and got the decryption key.
This situation could be avoided if there were a way to reliably restore data from trusted and secure backups in minutes, not days or weeks.
There is, but it requires thinking about backups and data differently.
Backups’ soft underbelly
Cybercriminals leverage one of the most obvious and fundamental facts about backups: backup files are written and read by the same networked operating system that is used for day-to-day business. Of course, backup files are special: They require a high level of permission to access, they are compressed, redundant copies may be kept in other locations, they are kept up to date with a frequency that depends on how long it takes to update them, and the like. But ultimately, they are just files on the system.
The integrity of those backups, therefore, depends on how secure the organization's systems are. Clearly, if criminals can encrypt an agency's production data, then that system can be compromised, and its backups are also at risk.
Even if the hackers can't get to the backups, the recency of those backups is a critical factor. Once-a-day backups leave an entire day of data unprotected, and the loss of a day's worth of data is more than a bother -- it's often catastrophic. Imagine the financial or health care sectors, where the loss of even an hour's worth of data can open an organization up to serious liabilities. For modern businesses, the speed and reliability of backups matter.
In addition to the loss of data, the time to restore the data is another critical factor. Depending on the size of the data stores, restoration from backups can take hours to days, resulting in unacceptable business disruption.
Rethinking backups with “air gaps”
If paying the ransom is Door #1, and successfully restoring from a backup is Door #2, agencies can make a two-part change that would let them walk through that second door. The first part of the change is to isolate the backup network and remove system-level access to backups, creating a type of logical "air gap" between the two systems. Of course, the backup system remains connected to the rest of the system -- otherwise, agencies wouldn't be able to do backups or restores -- but even a hacker who has access to production data will be locked out of the backup files.
Think of this “air-gapped” backup system as a separate data appliance: It looks to the operating system like a physical device that runs by its own rules, but it is in fact a virtual device that can read and write to the system when given the proper login credentials. It’s important that these credentials are 100% independent of the credentials expected by the main system. The original backups must be kept as read-only data behind the appliance’s sturdy door.
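As a toy illustration of the two properties just described -- credentials fully independent of the production system, and originals that are read-only once written -- here is a minimal Python sketch. All class, method and credential names are hypothetical, not a real product API:

```python
import hashlib


class AirGappedBackupStore:
    """Toy backup appliance: its own credentials, write-once snapshots.

    Hypothetical sketch for illustration -- not a real appliance API.
    """

    def __init__(self, appliance_password: str):
        # Credentials are checked by the appliance itself and are
        # independent of any login on the production system.
        self._cred_hash = hashlib.sha256(appliance_password.encode()).hexdigest()
        self._snapshots: dict = {}

    def _check(self, password: str) -> None:
        if hashlib.sha256(password.encode()).hexdigest() != self._cred_hash:
            raise PermissionError("appliance credentials required")

    def write_snapshot(self, password: str, name: str, data: dict) -> None:
        self._check(password)
        if name in self._snapshots:
            # Originals are immutable: no overwrite, no encryption-in-place.
            raise PermissionError(f"snapshot {name!r} is read-only")
        self._snapshots[name] = dict(data)  # store a private copy

    def read_snapshot(self, password: str, name: str) -> dict:
        self._check(password)
        return dict(self._snapshots[name])  # hand back a copy; original stays intact
```

The point of the sketch: even an attacker holding valid production credentials cannot read or overwrite a stored snapshot, and even a legitimate appliance user cannot modify one after the fact.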
Such an appliance creates the second change necessary to make Door #2 a viable alternative to paying the ransom: It creates a virtual data space. It not only manages the storage of the agency's data on physical media, with as many off-site copies as needed, but also creates a virtualized copy of that data for production. This means that the backups can be restored in minutes, avoiding multiple days of downtime and disruption. Moreover, the frequency of backups can be increased to minutes or even to real time, minimizing the data loss during the restore process.
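One way to see why a virtualized copy restores in minutes rather than days is a copy-on-write view: production reads fall through to the untouched snapshot, while writes land in a thin overlay, so no bulk copy of the data is ever made. A minimal sketch of that idea using Python's `collections.ChainMap` (the snapshot contents and key names are purely illustrative):

```python
from collections import ChainMap

# An immutable backup snapshot (in reality, blocks on the appliance's media).
snapshot = {"orders.db": "v1-clean", "users.db": "v1-clean"}

# "Restore" is instant because nothing is copied: writes go to the overlay,
# and reads of untouched keys fall through to the snapshot underneath.
overlay = {}
production_view = ChainMap(overlay, snapshot)

production_view["orders.db"] = "v2-post-restore"  # write lands in the overlay only

print(production_view["orders.db"])  # v2-post-restore (from the overlay)
print(production_view["users.db"])   # v1-clean (falls through to the snapshot)
print(snapshot["orders.db"])         # v1-clean -- the original is untouched
```

Real data-virtualization appliances do this at the block or file level rather than in a Python mapping, but the economics are the same: the cost of bringing a clean copy online is proportional to the changes made after restore, not to the size of the data set.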
Adopting such an approach affects more than how backups are made. It means that data requests can be accommodated in seconds or minutes, a huge benefit for agencies that rely on real-time transactions, and for research and development departments doing innovative work that depends on rapid iteration.
The ability of ransomware to reduce an organization's choices to two -- pay up or lose days hoping a restore from backup will work -- depends on a data architecture that urgently needs updating. It's imperative that agencies explore a new data architecture and rethink how they protect their backups against the scourge of ransomware.
Pritesh Parekh is chief security and trust officer and VP of engineering at Delphix.