December 13, 2018 | Updated on: March 27, 2024 | Reading time 19 minutes

Backup Strategy Best Practices for On-Premise and Cloud

The heart of data protection, both in the enterprise and in the cloud, is the backup. No matter how many mechanisms are in place for high availability and security of infrastructure, backing data up is critically important. Backups allow organizations to protect their most valuable asset against all types of data loss events and disaster recovery scenarios. There are various types of backup technologies and use cases, and these come into play when architecting data protection solutions for both on-premises and cloud environments.

Additionally, there are backup strategy best practices, methodologies, and considerations that factor into the design of these backup architectures. In this post, we will take a look at the various aspects of engineering backup solutions across different environments, as well as best practices for data protection both on-premises and in the cloud.

Best Practice Backup Methodologies

When thinking about backup methodologies and cloud data backup best practices, there is an industry-wide best practice known as the 3-2-1 rule that helps to create resiliency and redundancy from a backup perspective. What is the 3-2-1 backup methodology?

The 3-2-1 rule recommends having three (3) copies of your backups stored on two (2) different kinds of media, with at least one (1) copy stored off-site. This rule creates both redundancy in backups and diversity in storage locations and media, both of which are data backup best practices.
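
To make the rule concrete, here is a minimal sketch in Python that checks a list of backup copies against the 3-2-1 criteria. The fields and values are purely illustrative, not any particular product's schema:

```python
# Toy 3-2-1 compliance check: three copies, two media types, one off-site.
copies = [
    {"media": "disk",  "offsite": False},  # local backup appliance
    {"media": "tape",  "offsite": False},  # second media type, same site
    {"media": "cloud", "offsite": True},   # off-site copy
]

meets_321 = (
    len(copies) >= 3                            # at least 3 backup copies
    and len({c["media"] for c in copies}) >= 2  # on at least 2 media types
    and any(c["offsite"] for c in copies)       # at least 1 copy off-site
)
print("3-2-1 compliant:", meets_321)
```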

Why is this important? Think about a disaster that may affect an entire site, such as a natural disaster. You don’t want the only copy of your production backups to be stored at the same site as production. Why?

Well, the same disaster that takes down or destroys the production equipment and workloads poses an equal risk to the backups of that data. The results could be catastrophic! By diversifying both the locations and the backup media types on which backup data is stored, you greatly reduce the risk of losing all copies of your backup data.

The same reasoning applies to backups of cloud environments: you want to separate your production data from your backup data and diversify the locations in which each is stored.

Public cloud environments such as Google’s G Suite and Microsoft’s Office 365 already have impressive mechanisms in place that replicate your data behind the scenes between various nodes, pods, and other infrastructure components. It may be much less likely that you would lose data stored in a public cloud environment for the same reasons as in an on-premises disaster.

However, the principle of diversifying your backup data and distancing it from production still applies. What do we mean?

A case in point: if you have your production workloads, data, email, etc., stored in a Google G Suite environment, you may not want to keep your backup data in Google’s infrastructure as well.

If a public cloud vendor suffers an outage or interruption across its infrastructure, both your production data and your backup data may be inaccessible. Maintaining cloud diversity between production and backup data is a course of wisdom and holds true to the best practice methodology found in the 3-2-1 rule.

With that being said, using a backup solution that allows storing data outside the source public cloud environment becomes a requirement. By splitting apart your production and backup data in this way, you essentially create separate fault domains for your data. What are fault domains?

Fault domains are generally described in the context of hardware, meaning a set of hardware that shares a single point of failure. We can expand that definition in a logical sense as well: by keeping production data and backup data in separate fault domains, no single point of failure affects both.

Different Backup Types, Terms and Use Cases

Let’s explore the various types of backups and the terminology commonly used in solutions on-premises as well as in the public cloud. Considering backup types and related terms helps in understanding the critical components of an effective backup strategy in today’s hybrid environments. We will take a look at the following:

  • On-premises vs Cloud backup
  • Cloud-to-cloud backup – benefits
  • Snapshot backup and its use in backup technologies
  • Incremental Backup vs Full Backup
  • Versioning
  • RPO vs RTO
  • Backup Retention Policies

All of the above-listed terms and technologies are important in the realm of data protection. Understanding each of them gives a good overall picture of how today’s backup solutions need to be architected and engineered for various use cases. Developing an enterprise backup strategy that takes into account different backup types, terms, and use cases is essential for protecting data in the most effective way possible.

On-premises vs Cloud Backup

This comparison may be a bit obvious; however, it is still worth considering a few of the high points and challenges of each area. The traditional backup environment in the enterprise has been on-premises infrastructure. Years ago, physical servers and on-premises backup servers allowed taking backups of the files and folders that lived on file servers and other member servers serving out resources.

Back then, backup technology was fairly crude and took full backups of servers each time a backup ran. Additionally, traditional backups were not good at interacting with applications.

As technology has progressed, on-premises backups have improved greatly in both efficiency and advanced features. On-premises backup technology today is able to interact with modern virtualization platforms and perform application-aware backups that integrate with technologies such as Microsoft’s Volume Shadow Copy Service (VSS). Linux backups can trigger pre- and post-scripts to enable application-aware backups. This allows capturing snapshots of technologies such as Microsoft SQL Server, Microsoft Exchange Server, and Microsoft Active Directory Domain Services in a consistent state.

It is critical to capture applications such as databases in a consistent state. Database-driven applications are transaction-based. If the backups taken are not transactionally consistent, they will be corrupt or require further triaging after a disaster recovery event and subsequent restore operation. This typically involves replaying log files, in the case of Microsoft SQL Server for example.
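
As a rough illustration of how application-aware backups are commonly orchestrated, the sketch below wraps a snapshot in pre- and post-backup hooks. The script paths and the `take_snapshot` callable are hypothetical placeholders, not any specific vendor’s API:

```python
import subprocess

# Hypothetical hook scripts: a pre-hook typically flushes buffers and holds
# writes (quiesce), and a post-hook resumes normal activity (thaw).
PRE_HOOK = "/etc/backup/pre.d/quiesce-db.sh"
POST_HOOK = "/etc/backup/post.d/thaw-db.sh"

def run_application_aware_backup(take_snapshot):
    subprocess.run([PRE_HOOK], check=True)       # quiesce the application first
    try:
        take_snapshot()                          # capture data in a consistent state
    finally:
        subprocess.run([POST_HOOK], check=True)  # always thaw, even on failure
```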

The lines between on-premises backups and cloud backups are blurring with today’s modern data protection solutions and hybrid infrastructure environments. Cloud backup is a much newer requirement than traditional on-premises backup, having existed only as long as cloud technologies themselves, roughly the past decade. True data protection of public cloud resources and services has been sorely lacking in most landscapes for quite some time.

Only in recent years have more powerful solutions come onto the scene. Public cloud vendors have been slow to provide native tooling and services that offer an integrated way to protect data in cloud environments. Customers have also been slow to demand these types of tools in the public cloud landscape. This can be attributed in part to the perception of cloud environments in general.

Many have had the idea that public cloud environments are immune to the data protection concerns that exist on-premises. However, data corruption and data deletion can and do happen in public cloud environments for any number of reasons. Public cloud high availability is often confused with data protection. While public cloud infrastructure may be tremendously resilient to availability issues involving hardware, connectivity, and the like, data is still at risk in the cloud if not properly protected.

Cloud backups are much less understood by organizations coming from traditional on-premises environments, backup methodologies, and toolsets. Cloud backups typically do not involve your own hardware, such as a dedicated backup server with data protection software installed. Instead, they may involve third-party solutions that provide Backup-as-a-Service functionality.

This general lack of understanding among organizations looking to migrate, or already migrating, to the cloud leads to data protection either not being implemented correctly or not being implemented at all.

Generally speaking, most cloud providers have not provided the effective tools that organizations need to properly back up business-critical infrastructure running inside public cloud environments. This means organizations must choose capable third-party vendors that supply the integration, tools, and protection for the business-critical services they depend on in the public cloud. Key capabilities to consider when choosing a data protection solution for the public cloud include:

  • Automatic backups multiple times a day
  • Effective Versioning
  • Secure backups – Encrypted both in-flight and at-rest
  • Can store backups outside of the environment being protected
    • Multi-cloud interoperability with storing data
  • Retention control – multiple or unlimited restore points

Cloud backups and data protection have become vitally important for organizations utilizing cloud environments for a wide range of use cases and business-critical services. The cloud has matured into a core component of infrastructure in most environments, and architecting effective data protection requires taking these cloud-driven needs into account.

Benefits of Cloud-to-cloud Backup

Another interesting topology for backing up data in the cloud is taking backups from one cloud and targeting storage in another cloud. Let’s go back to the 3-2-1 backup methodology. Part of what defines a resilient backup configuration is having backups in multiple places and diversity in the locations of those backups.

Much of what defines the “best practice” reason for this is that you don’t want to have all your “eggs in one basket” when it comes to both production data and backup data.

When you engineer backups of one public cloud and store the backup data inside that same public cloud infrastructure, then despite the ultra-resiliency of today’s public cloud networks, you are in a sense storing your backup data in the same fault domain as your production data, as described earlier. If that public cloud provider suffers an outage affecting production data, your backup data would be inaccessible as well.

Performing cloud-to-cloud backups, taking data from a production environment and storing it inside another public cloud environment, provides this diversity in data location and separates the fault domains of your production and backup data. This separation provides significantly more redundancy for business-critical data and the backups of that data.

Snapshot Backup – Use in Backup Technologies

The term snapshot is used all over the landscape of data protection solutions. A snapshot can refer to different things depending on the technology being used and the context in which the term appears. The snapshot gained much popularity as a data protection term with the rise of virtualization. Virtualization platforms from vendors such as VMware and Microsoft allow creating a point-in-time “image” of a virtual machine that can easily be rolled back to if needed.

Data protection solutions have long utilized these native mechanisms for taking effective backups of VM resources by temporarily redirecting writes from the virtual machine disk to a snapshot virtual disk, which is essentially a differencing disk. While current activity is redirected to the differencing snapshot disk, the data protection solution can take a backup of the virtual disk in use. Once the backup is taken, the snapshot delta disk is merged back into the base disk and deleted.
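
Here is a minimal sketch of that redirect-on-write idea, assuming simple block-addressed storage; the class and method names are illustrative only:

```python
# While a snapshot exists, writes land in a delta map and the base stays
# frozen, so a backup tool can read a consistent point-in-time view.
class SnapshottedDisk:
    def __init__(self, base_blocks):
        self.base = base_blocks      # frozen base disk: block number -> data
        self.delta = {}              # differencing "disk": post-snapshot writes

    def write(self, block_no, data):
        self.delta[block_no] = data  # redirect writes away from the frozen base

    def read(self, block_no):
        # Live view of the VM: newest data wins (delta first, then base)
        return self.delta.get(block_no, self.base.get(block_no))

    def read_snapshot(self, block_no):
        return self.base.get(block_no)  # what the backup reads: point-in-time view

    def consolidate(self):
        self.base.update(self.delta)    # after the backup: merge the delta back
        self.delta = {}                 # ...and delete it
```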

In general, a snapshot is a point-in-time representation of the state of your data and its contents at that particular moment. You can think of it much like taking a picture of your data at a given time, hence the analogy of a “snapshot”. The picture or representation of your data at that particular point is contained within the snapshot. Cloud backups are generally described as snapshots of your data.

Again, the snapshot is a representation of all your data at that particular point. Importantly, data can be restored from these “snapshots”; it can be returned to the way it looked in that “image” of your data. There are many different implementations of the term snapshot, but in most cases it is a representation of the way your data looks at a specific point in time.

Incremental Backup vs Full Backup

Backup efficiency is hugely important when you are talking about a large amount of data and the changes that may take place over a given period of time. Generally speaking, there are two types of backups widely used in today’s backup solutions: the full backup and the incremental backup. Each has its role to play in the overall backup methodology. However, what are the differences between the two, and what are the potential use cases for each?

The full backup is what it sounds like: a backup of all data contained in the backup source. The full backup is generally used the first time a source is backed up. This makes sense, as it is the first backup of the data from the source environment and no other backups exist.

Things get interesting on each successive iteration of the backup job, in that most modern backup solutions can run incremental backups of the data on each subsequent run. What this means is greatly improved backup efficiency.

This is because incremental backups only back up the data that has changed since the last backup, whether full or incremental. It would be extremely inefficient to back up gigabytes or even terabytes worth of data if only a few kilobytes have changed since the last backup! Some data protection technologies, such as those used in the virtualization world, copy changes at the block level. This allows for extremely granular and efficient backup copies of the data that has changed.
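
Here is a minimal sketch of the file-level version of this idea, detecting changes by comparing content hashes against an index from the previous run. The function names are illustrative; block-level change tracking applies the same logic at a finer granularity:

```python
import hashlib
from pathlib import Path

def file_digest(path):
    # Hash the file contents to detect any modification since the last run
    return hashlib.sha256(path.read_bytes()).hexdigest()

def incremental_backup(source_dir, last_index):
    """Return the new index plus the list of files that actually need copying."""
    new_index, changed = {}, []
    for path in Path(source_dir).rglob("*"):
        if not path.is_file():
            continue
        digest = file_digest(path)
        new_index[str(path)] = digest
        if last_index.get(str(path)) != digest:  # new or modified since last run
            changed.append(path)
    return new_index, changed
```

On the first run, `last_index` is empty, so every file counts as changed and the run degenerates into a full backup, mirroring how real solutions seed an incremental chain.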

The ability to perform incremental backups is extremely important when cloud environments are the source of the backup. Both storage and network traffic come at even more of a premium in the cloud than in on-premises environments. Since most cloud providers charge for data egress, cloud-to-cloud backups that are extremely efficient and only copy changed data will pay dividends in lowering the overall cost of both backup storage and network traffic.

Versioning

One important characteristic of backups to consider is versioning. The value of being able to restore data lies not only in restoring it as it is now, or even as it was earlier the same day, but also in being able to restore an alternate, earlier version of the same file.

Consider the situation where an unwanted change is made to a file but is not discovered until many days later. If your backups contain only one version of the file, that version will most likely include the most recent changes, leaving you no way to restore the file to the previous “version” before the unwanted changes were made.

Being able to restore multiple “versions” of your data is essential to effective disaster recovery! To illustrate this further, think about the following scenario. A ransomware variant silently encrypts the files on an on-premises workstation. The workstation is syncing those changes via Google Drive Sync to the Google G Suite public cloud.

The next scheduled backup run, keeping only a single version of each file, starts its next iteration. Now all the “changes” made by the ransomware are backed up as if they were legitimate changes! A true disaster indeed! You now have not just a single changed file; potentially ALL your data has been encrypted by ransomware and synchronized to your public cloud environment.

With effective versioning and multiple copies of the changes made to your data, you have an effective way to counteract the damaging changes made to the data by ransomware.
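
As a minimal sketch of this idea, the versioned store below appends a timestamped version of a file on each backup run instead of overwriting a single copy, so a restore can recover the last clean version before a given moment, such as a ransomware encryption event. Class and method names are illustrative only:

```python
from datetime import datetime, timezone

class VersionedStore:
    def __init__(self):
        self.versions = {}   # file path -> list of (timestamp, content)

    def put(self, path, content):
        # Each backup run appends a new version rather than overwriting
        stamp = datetime.now(timezone.utc)
        self.versions.setdefault(path, []).append((stamp, content))

    def restore_before(self, path, moment):
        # Recover the newest version written BEFORE the given moment,
        # e.g. the last clean copy prior to the ransomware encryption
        candidates = [(t, c) for t, c in self.versions.get(path, []) if t < moment]
        if not candidates:
            raise KeyError(f"no version of {path} before {moment}")
        return max(candidates, key=lambda tc: tc[0])[1]
```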

RPO vs RTO

RPO or Recovery Point Objective

Two terms are often referenced when it comes to data protection solutions: RPO and RTO. These terms are often misunderstood, or not understood at all, when it comes to protecting and restoring data. The term RPO stands for Recovery Point Objective.

The easiest way to understand RPO is to think of it as the amount of data that a business can stand to lose and continue operating. As an example, if you are backing up your environment every hour, the RPO for your environment would be 1 hour. In other words, you are willing to lose 1 hour’s worth of data.

If your backup runs and the system or storage crashes 59 minutes later, you would theoretically lose 59 minutes’ worth of changes, since the next backup would not yet have kicked off. The RPO will most likely be different for each organization. For some, an RPO of 1 day is acceptable, while others may need RPO values as low as minutes. The RPO value is set based on the criticality of the data involved.

RTO or Recovery Time Objective

The RTO, or Recovery Time Objective, is the amount of time it will take to restore the data back to the RPO. In other words, the RTO defines how long the business can stand to be without the data being available. For most, an RTO as low as possible is the desired value. The RTO must be considered because it weighs into the overall disaster recovery/business continuity plan, determining how long business processes will be impacted by a data loss event.

Both the RPO and the RTO values are extremely important considerations when thinking about disaster recovery and are terms that will no doubt come to light when formulating the disaster recovery/business continuity plan for your organization.
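
As a toy illustration of how these two values relate to a backup schedule, the sketch below works out the worst case; the numbers are purely illustrative:

```python
from datetime import timedelta

backup_interval  = timedelta(hours=1)     # backups run hourly
detection_time   = timedelta(minutes=15)  # time to notice the failure
restore_duration = timedelta(hours=2)     # time to restore the last backup

# Worst case, the crash happens just before the next backup fires,
# so the data loss is bounded by the backup interval.
worst_case_data_loss = backup_interval                    # achieved RPO
worst_case_downtime  = detection_time + restore_duration  # achieved RTO

print(f"Worst-case data loss (RPO): {worst_case_data_loss}")
print(f"Worst-case downtime (RTO):  {worst_case_downtime}")
```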

Backup Retention Policies

Retention also comes into play when thinking about data protection solutions. Retention determines the amount of “versioning”, or the number of restore points, kept on hand. As covered earlier, versioning depends on restore points that have been cataloged and stored so that data can be recovered if it is changed inadvertently or deleted altogether.

Most organizations have what is called a retention policy: an automatic, system-controlled process that prunes restore points after a certain amount of time. This could be as short as a few days or as long as a year or more. Each business use case is different, and retention policies are configured to align with each specific business need. What factors may affect the retention policy?

Various factors may weigh into the retention policy configuration. These include compliance regulations that may determine how much information is kept on hand at any given time. The reality of “backups” is that the data they contain is production data. If compliance regulations determine how much history or data can be kept, backups are part of this consideration.

Additionally, the more restore points you keep, the more storage it takes to keep them. This is especially true for cloud-to-cloud backups, since cloud environments are generally billed based on usage. There will most likely be a business decision weighing the number of restore points retained against the cost of keeping them.
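
A minimal sketch of retention pruning, assuming a single fixed window; real policies often layer tiered rules (daily/weekly/monthly points, legal holds) on top of this idea:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)   # illustrative: keep 30 days of restore points

def prune(restore_points, now=None):
    """Keep only restore points newer than the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION
    return [rp for rp in restore_points if rp >= cutoff]
```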

Efficient, Effective Use of Cloud-to-cloud Backup

All of the terms and methodologies discussed are critically important to engineering a data protection solution that is effective, efficient, and provides the modern capabilities needed in today’s hybrid environments. Spinbackup allows organizations to implement these recommended best practices in an effective and extremely capable cloud backup platform.

Spinbackup provides businesses today with the options needed to align their individual use cases with their data protection strategies. By providing organizations with the choice of where data is stored when backed up from various public cloud environments, the business can make the choice that is right for its objectives.

Spinbackup is one of the only data protection solutions that allows businesses who are backing data up in Google’s G Suite or Microsoft Office 365 environments to store those backups outside of the environment being protected. This supports true backup best practices methodology by allowing organizations to separate out backups from production data.

Additionally, the wide range of capabilities afforded by the Spinbackup data protection solution includes:

  • Automated, fully versioned backups
  • Multiple backups daily
  • Point-in-time restores of existing or deleted files
  • Easy migration of public cloud files between user accounts
  • True incremental backups – only changes are backed up and efficiently stored
  • Secure backups – encryption in-flight and at-rest
  • Alerting, reporting, and easy visibility through the Spinbackup Dashboard
  • Powerful Ransomware Protection that prevents data loss before it happens
  • Cybersecurity, powered by Machine Learning algorithms that watch environments 24×7

By both protecting and securing public cloud environments, Spinbackup encompasses the needs of organizations today in a single solution that provides both backups and security!


About Author

Davit Asatryan is the Vice President of Product at Spin.AI

He is responsible for executing product strategy by overseeing the entire product lifecycle, with a focus on developing cutting-edge solutions to address the evolving landscape of cybersecurity threats.

He has been with the company for over 5 years and specializes in SaaS Security, helping organizations battle Shadow IT, ransomware, and data leak issues.

Prior to joining Spin.AI, Davit gained experience by working in fintech startups and also received his Bachelor’s degree from UC Berkeley. In his spare time, Davit enjoys traveling, playing soccer and tennis with his friends, and watching sports of any kind.

