Backup-in-depth is a relatively new concept that involves making chained backups or synchronizations of files in their native format, for example:
There are some intricacies in setting up this chain to make it more secure. One of the biggest drivers behind the development of backup-in-depth is the spread of ransomware.
The most obvious answer is to back up data as often as possible to the most easily accessible location: an external local hard drive or the local area network. Or even to simply keep user files on shared drives on the LAN.
Until recently, the main purpose of the backup process was to provide defense against data deletion or corruption caused by unintentional user or software actions or by hardware failure, and much less frequently by a malicious program. So all efforts were made to preserve copies of important data, but not necessarily to restrict access to the backup copies.
Recent ransomware implementations have made both local external drives and the LAN unsafe for backups. For example, CryptoWall encrypts user files on a computer's hard drive and any attached external drives, then scans the network for shared drives to which the user has access and encrypts the information on them as well.
Damage caused by ransomware attacks is not limited to the ransom demanded by the perpetrators to release the decryption key. In fact, the losses resulting from the interruption of business activities often turn out to be orders of magnitude larger.
Several hospitals that were successfully attacked by ransomware recently made the headlines. Healthcare organizations face particularly high stakes in dealing with ransomware because disruptions in service availability jeopardize the core mission of the organization. Once vital data is no longer available, surgeries and appointments are delayed, lab results cannot be delivered or take longer, and ultimately patients have to be diverted to other facilities.
Sometimes the impact of a ransomware attack extends well beyond the attack itself. A study conducted by the AC Group on the downtime cost of electronic health record systems gives us an indication. This study assessed several cyberattack consequences, including the additional time spent performing tasks manually and updating records after the systems were back up. The study established an average system downtime cost of $488 per hour per physician.
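To see how quickly that figure compounds, consider a rough back-of-the-envelope estimate (the $488 per-hour rate is from the study above; the practice size and outage duration below are hypothetical examples, not from the study):

```python
# Rough downtime-cost estimate based on the AC Group figure of
# $488 per hour of EHR downtime per physician.
COST_PER_PHYSICIAN_HOUR = 488  # USD, from the study cited above

def downtime_cost(physicians: int, hours: float) -> float:
    """Estimated cost of an EHR outage for a practice."""
    return physicians * hours * COST_PER_PHYSICIAN_HOUR

# Hypothetical example: a 20-physician practice down for a
# 48-hour ransomware recovery.
print(f"${downtime_cost(20, 48):,.0f}")  # → $468,480
```

Even a mid-sized practice facing a multi-day recovery can see losses far exceeding a typical ransom demand.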
There are two obvious alternatives to storing backup data on the LAN or on external drives: an expensive backup system that stores copies of data on a custom server, or sending copies of files to a storage location across the Internet using a secure FTP or WebDAV server.
If an FTP or WebDAV server is used to store backup copies of files, then at least in theory a malicious program can intercept the access credentials from the backup program, connect directly to the FTP or WebDAV server, and delete every file those credentials allow it to delete.
In this scenario, once files are propagated to the Tier 2 server, there is no way for the virus to reach them, because the Tier 2 server does not expose a server of its own. Instead, it uses a synchronization client to connect to the Tier 1 server.
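The key property of this pull model can be sketched as follows. This is a minimal local-filesystem illustration of the idea, not GoodSync's implementation; in a real deployment the Tier 2 machine would connect out to Tier 1 over a remote protocol, and no process on the Live System or Tier 1 would hold any credentials for Tier 2:

```python
# Minimal sketch of the pull model described above: the Tier 2 machine
# periodically connects OUT to Tier 1 and copies new or changed files.
# Because Tier 2 runs no server process, malware that compromises the
# Live System or Tier 1 has no way to reach Tier 2's copies.
# The directory paths stand in for the two machines (hypothetical).
import shutil
from pathlib import Path

def pull_sync(tier1_root: Path, tier2_root: Path) -> list[str]:
    """Copy files that are new or changed on Tier 1 into Tier 2."""
    pulled = []
    for src in tier1_root.rglob("*"):
        if not src.is_file():
            continue
        dst = tier2_root / src.relative_to(tier1_root)
        if not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps too
            pulled.append(str(src.relative_to(tier1_root)))
    return pulled
```

The direction of the connection is the whole point: Tier 2 initiates every transfer, so there is nothing on Tier 1 worth stealing that would grant access to Tier 2.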
Copying files over FTP or WebDAV has one disadvantage compared with copying them over the LAN: the security attributes of the files cannot be transferred with them.
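The limitation can be demonstrated locally. A plain byte-for-byte copy, which is effectively what a file arriving over FTP or WebDAV amounts to, does not carry the original permission bits, while a metadata-aware copy does (a small illustration on a POSIX system; the file names are hypothetical):

```python
# A byte-only copy loses the restrictive 0o600 mode (the destination
# gets default permissions from the umask), while a metadata-aware
# copy carries the mode and timestamps along with the content.
import os
import shutil
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "payroll.xlsx"
    src.write_bytes(b"data")
    os.chmod(src, 0o600)              # restrict access to the owner

    bytes_only = Path(d) / "via_ftp.xlsx"
    shutil.copyfile(src, bytes_only)  # content only; mode set by umask

    with_meta = Path(d) / "via_lan.xlsx"
    shutil.copy2(src, with_meta)      # content plus mode and timestamps

    print(oct(os.stat(src).st_mode & 0o777))        # → 0o600
    print(oct(os.stat(with_meta).st_mode & 0o777))  # → 0o600
```

A backup that silently drops ownership and permission information can leave restored files readable to users who should never see them.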
GoodSync developers have overcome that limitation by introducing the GSTP file transfer protocol - a deep modification of the WebDAV protocol that copies security attributes with the files.
The system described in the figure above can be achieved by deploying GoodSync clients on the Live System and the Tier 2 server, and a GoodSync File Server on the Tier 1 server.
Please note that GoodSync has different client licenses for server and non-server operating systems.
Please contact us for more information and a live demo: