Cloud Backup: Off-Site Storage, BaaS, Tiers, Immutability

Cloud Backup

Backup copies of your data stored on remote cloud infrastructure rather than local disks or tape. Cloud backup is the modern way to satisfy the offsite requirement of the 3-2-1 rule: backups live in provider data centers physically separated from the source environment. The major destinations are AWS S3, Azure Blob Storage, and Google Cloud Storage, each offering tiered storage from hot (millisecond access, higher cost) to deep archive (12-48 hour retrieval, lowest cost). Architectures range from customer-managed software writing to cloud buckets (Veeam, Acronis, Backblaze) to fully managed Backup-as-a-Service (Druva, Cohesity, Iron Mountain). Critical modern features include immutable storage variants (S3 Object Lock, Azure immutable blobs) that resist ransomware by preventing modification or deletion until retention expires.

Reference content reviewed by recovery engineers.

Cloud backup is both a process and a service: production data replicated to remote cloud infrastructure for protection against loss, corruption, ransomware, and disasters. The Cohesity reference describes it: organizations choose backup in the cloud to keep their data safe and accessible in case of a hardware or software failure, outage, natural disaster, or cyberattack such as ransomware. Cloud backup naturally satisfies the offsite component of the 3-2-1 backup rule. Modern cloud backup includes storage tier optimization (Hot/Cool/Cold/Archive), immutable storage for ransomware resistance, and BaaS managed offerings. The shared responsibility model means customers remain responsible for protecting their SaaS data; Microsoft and Google explicitly state they do not back up your data.

What Cloud Backup Is

The Cohesity cloud backup reference defines the concept: “Cloud backup is both a process and a service in which production data is replicated and stored in the cloud to protect it from loss or corruption. Organizations choose backup in the cloud to keep their data safe and accessible in case of a hardware or software failure, outage, natural disaster, or cyberattack such as ransomware. Because all backup software and infrastructure is hosted and managed by the provider, cloud backup is also known as backup as a service (BaaS).”1

The fundamental concept

Cloud backup is defined by where the backup copy lives and what services manage it:

  • Where the data lives: in cloud provider data centers physically separated from the source environment.
  • What replicates: production data, databases, virtual machines, file shares, endpoint devices, SaaS application data.
  • How replication happens: backup agents on source systems, hypervisor integration, API-based for SaaS, network-attached storage replication. Most cloud backup uses incremental backup after the initial seed to minimize bandwidth.
  • Network transport: typically TLS-encrypted over public internet, sometimes via private connectivity (AWS Direct Connect, Azure ExpressRoute, Google Cloud Interconnect).
  • Storage at destination: object storage with multiple tiers, deduplication, encryption at rest, optional immutability.
  • Service tiers: from raw cloud object storage (customer-managed) to fully managed BaaS (provider handles everything).
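The incremental-after-seed pattern mentioned above can be sketched as a hash-based change detector. This is an illustrative toy, not the implementation any particular backup product uses: `block_hashes` and `changed_blocks` are hypothetical names, and real products track changes with changed-block tracking at the hypervisor or filesystem layer rather than rehashing the source.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks, a common granularity

def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block of the source data."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

def changed_blocks(data: bytes, previous_manifest: list[str]) -> list[int]:
    """Return indices of blocks that differ from the last backup's
    manifest; only these need uploading after the initial seed."""
    current = block_hashes(data)
    return [
        i for i, h in enumerate(current)
        if i >= len(previous_manifest) or h != previous_manifest[i]
    ]

# Initial seed: hash everything.
v1 = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE
manifest = block_hashes(v1)

# Next run: only the second block was modified, so only it uploads.
v2 = b"A" * BLOCK_SIZE + b"C" * BLOCK_SIZE
print(changed_blocks(v2, manifest))  # → [1]
```

The bandwidth saving is the point: after the seed, daily transfer is proportional to change rate, not total data size.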

Why cloud backup satisfies the 3-2-1 rule

The DataBank cloud backup reference describes the alignment: “Cloud backups remain safe even if your physical office is compromised by flood, fire, or theft. Automation: Scheduled, incremental backups eliminate manual processes and reduce human error. Scalability: Cloud capacity grows with your data needs, avoiding the cost of new hardware.”2 The 3-2-1 alignment:

  • 3 copies of data: production + local backup + cloud backup = 3 copies satisfied.
  • 2 different media types: local disk + cloud object storage = 2 different media satisfied.
  • 1 copy offsite: cloud is by definition offsite from production source.
  • Geographic separation: cloud regions provide protection against regional disasters.
  • Multiple availability zones: cloud providers replicate within regions for additional redundancy.
  • Modern 3-2-1-1-0 extension: add immutable copy and zero verified errors via cloud-native features.
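The checklist above can be expressed as a small compliance check. The dictionary schema (`media`, `offsite`, `immutable`, `verified`) is invented for illustration; no backup product exposes exactly these fields.

```python
def check_3_2_1_1_0(copies):
    """Evaluate a set of data copies against the 3-2-1-1-0 rule.
    Each copy is a dict with illustrative, hypothetical field names."""
    media = {c["media"] for c in copies}
    return {
        "3_copies": len(copies) >= 3,
        "2_media": len(media) >= 2,
        "1_offsite": any(c["offsite"] for c in copies),
        "1_immutable": any(c.get("immutable", False) for c in copies),
        "0_errors": all(c.get("verified", False) for c in copies),
    }

copies = [
    {"media": "disk",   "offsite": False, "verified": True},                     # production
    {"media": "disk",   "offsite": False, "verified": True},                     # local backup
    {"media": "object", "offsite": True,  "immutable": True, "verified": True},  # cloud backup
]
print(check_3_2_1_1_0(copies))
# Production + local + immutable cloud passes all five checks.
```

Remove the cloud copy's `immutable` flag and the check fails, which is exactly the gap ransomware exploits.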

The cloud-storage-vs-cloud-backup distinction

The Iron Mountain reference describes a critical confusion: “While cloud storage is useful for accessibility, a dedicated cloud backup service is essential for business continuity. It provides the mechanisms needed for a full recovery after a data loss event.”3 Key differences:

  • Cloud storage (Dropbox, Google Drive, OneDrive): active synchronized working copies; if local is corrupted, cloud reflects that change.
  • Cloud backup: historical recovery points retained on schedule; original states preserved despite changes to source.
  • Versioning depth: cloud storage typically retains 30-90 days; cloud backup retains weeks, months, or years per policy.
  • Restoration model: cloud storage is file-by-file retrieval; cloud backup supports bare-metal recovery, mass restore, point-in-time database restore.
  • Immutability: cloud backup typically supports WORM retention; cloud storage typically does not.
  • Cost structure: cloud storage is per-GB ongoing; cloud backup includes retention pricing, retrieval fees, and request-count fees.
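The "historical recovery points retained on schedule" property is what a retention policy produces. A minimal sketch of a grandfather-father-son schedule, assuming a simple daily/weekly/monthly layout (real products expose richer schedules; `retained_points` is a hypothetical helper):

```python
from datetime import date, timedelta

def retained_points(today, daily=7, weekly=4, monthly=12):
    """Recovery points kept under a simple grandfather-father-son
    policy: N daily, N weekly (Sundays), N monthly (1st of month)."""
    points = {today - timedelta(days=i) for i in range(daily)}
    sunday = today - timedelta(days=today.weekday() + 1)
    points |= {sunday - timedelta(weeks=i) for i in range(weekly)}
    first = today.replace(day=1)
    for _ in range(monthly):
        points.add(first)
        first = (first - timedelta(days=1)).replace(day=1)
    return sorted(points)

pts = retained_points(date(2024, 6, 15))
print(len(pts), "recovery points, oldest:", pts[0])
```

Contrast with sync-style cloud storage: a year-old recovery point simply does not exist once version history (typically 30-90 days) has rolled off.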

Common use cases

Cloud backup addresses several specific data protection requirements:

  • Off-site backup component of 3-2-1: the most common reason; cloud replaces tape rotation for offsite copy.
  • Endpoint backup: laptops and workstations backed up to cloud regardless of location.
  • SaaS data protection: backing up Microsoft 365, Google Workspace, Salesforce against the shared responsibility gap.
  • Cloud-to-cloud backup: backing up cloud-resident workloads to a different cloud or different account/tenant.
  • Long-term retention: compliance archives stored in deep-archive tiers at minimal cost.
  • Disaster recovery: DRaaS (Disaster Recovery as a Service) extends backup with cloud-based VM spin-up.
  • Tape replacement: cloud archive tiers replace physical tape for organizations consolidating storage.

Storage Tiers and Retrieval Economics

Cloud providers offer multiple storage tiers optimized for different access patterns. Understanding the tier structure is essential because it directly determines cost, retrieval time, and retrieval fees during recovery operations.

AWS S3 storage classes

The AWS Glacier reference describes the tier structure: “You can choose from three archive storage classes optimized for different access patterns and storage duration. For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class, an archive storage class that delivers the lowest cost storage with milliseconds retrieval. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval, with retrieval in minutes or free bulk retrievals in 5-12 hours.”4

Storage Class | Cost (per GB/mo) | Retrieval Time | Use Case
S3 Standard | $0.023 | Milliseconds | Frequent access, hot data
S3 Standard-IA | $0.0125 | Milliseconds | Infrequent access, 30-day minimum
S3 Intelligent-Tiering | Variable | Milliseconds | Auto-tiering by access pattern
S3 Glacier Instant Retrieval | $0.004 | Milliseconds | Archive with quick access
S3 Glacier Flexible Retrieval | $0.0036 | 1 min – 12 hours | Backup, DR (90-day minimum)
S3 Glacier Deep Archive | $0.00099 | 12 – 48 hours | Compliance, long-term (180-day minimum)

S3 Glacier retrieval tiers

The AWS S3 Glacier documentation describes the retrieval options: “The following retrieval tiers are available for S3 Glacier Flexible Retrieval: Expedited retrieval (typically restores the object in 1-5 minutes); Standard retrieval (typically restores the object in 3-5 hours); Bulk retrieval (typically restores the object within 5-12 hours; bulk retrievals are free).”5 Pricing implications from the n2ws reference:

  • Expedited retrieval: $0.03 per GB plus $10 per 1,000 requests; fastest but most expensive.
  • Standard retrieval: $0.01 per GB; typical for moderate-urgency restores.
  • Bulk retrieval: free for Glacier Flexible Retrieval; preferred for large planned restores.
  • Glacier Deep Archive standard: $0.02 per GB; bulk at $0.0025 per GB.
  • Minimum storage durations: 90 days for Glacier Flexible, 180 days for Deep Archive; early deletion incurs charges.
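The retrieval fees above translate into very different bills for the same restore. A sketch using the per-GB figures quoted in this section (illustrative; check current AWS pricing before relying on them):

```python
# Per-GB retrieval pricing as quoted above (illustrative figures).
RETRIEVAL = {
    ("flexible", "expedited"): 0.03,
    ("flexible", "standard"):  0.01,
    ("flexible", "bulk"):      0.0,
    ("deep_archive", "standard"): 0.02,
    ("deep_archive", "bulk"):     0.0025,
}

def retrieval_cost(gb, storage_class, tier, requests=1):
    """Estimate retrieval fees for a restore of `gb` gigabytes."""
    cost = gb * RETRIEVAL[(storage_class, tier)]
    if (storage_class, tier) == ("flexible", "expedited"):
        cost += requests / 1000 * 10.0  # $10 per 1,000 expedited requests
    return round(cost, 2)

# Restoring 5 TB (5,120 GB) from Glacier Flexible Retrieval:
print(retrieval_cost(5120, "flexible", "expedited", requests=5120))  # → 204.8
print(retrieval_cost(5120, "flexible", "standard"))                  # → 51.2
print(retrieval_cost(5120, "flexible", "bulk"))                      # → 0.0
```

The spread (free bulk vs roughly $200 expedited for the same 5 TB) is why matching retrieval tier to actual RTO matters.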

Azure Blob storage tiers

Microsoft Azure provides parallel tier structure for Blob Storage:

  • Hot tier: frequently accessed data; highest storage cost, lowest access cost.
  • Cool tier: infrequently accessed data; 30-day minimum storage; lower storage cost than Hot.
  • Cold tier: rarely accessed data; 90-day minimum; introduced in 2023 between Cool and Archive.
  • Archive tier: long-term retention; 180-day minimum; data must be rehydrated before access (1-15 hours).
  • Lifecycle management: automatic tier transitions based on age or access patterns.
  • Geo-redundancy: LRS (locally redundant), ZRS (zone-redundant), GRS (geo-redundant), RA-GRS (read-access geo-redundant).
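The minimum-storage-duration rules above (30/90/180 days, on both AWS and Azure tiers) bill a prorated charge if an object is deleted early. A sketch of that proration, assuming a simple remaining-days model (`early_deletion_charge` is a hypothetical helper; exact proration varies by provider):

```python
def early_deletion_charge(gb, per_gb_month, days_stored, minimum_days):
    """Prorated charge for deleting an object before the tier's
    minimum storage duration elapses."""
    remaining = max(0, minimum_days - days_stored)
    return round(gb * per_gb_month * remaining / 30, 2)

# Deleting 1,000 GB from a 90-day-minimum tier at $0.0036/GB-month
# after only 30 days incurs a charge for the remaining 60 days:
print(early_deletion_charge(1000, 0.0036, 30, 90))  # → 7.2
```

This is why retention policies shorter than the tier minimum silently cost more than expected.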

Google Cloud Storage tiers

Google Cloud Storage offers four tiers similar in structure to AWS and Azure:

  • Standard: high-performance frequent access tier.
  • Nearline: 30-day minimum; suitable for once-per-month access.
  • Coldline: 90-day minimum; suitable for once-per-quarter access.
  • Archive: 365-day minimum; lowest cost; data accessed less than once per year.
  • Multi-region and dual-region: options for additional geographic redundancy.
  • Object Lifecycle Management: automatic transitions and deletion based on age.

Durability and availability

Cloud backup tiers provide industry-standard durability metrics:

  • 11 nines durability: AWS S3 (99.999999999%), Azure Blob, Google Cloud Storage all advertise 11-nines durability.
  • What durability means: the probability of any given object being lost in a year; 11 nines means losing 1 object out of 100 billion per year.
  • Multiple AZ replication: data redundantly stored across multiple physically separated availability zones within a region.
  • Cross-region replication: optional asynchronous replication to additional regions for disaster recovery.
  • Availability SLAs: typically 99.9% to 99.99% per tier; lower for archive tiers due to retrieval delays.
  • Caveat for One Zone storage: single-AZ tiers (S3 One Zone-IA) sacrifice durability for cost; not recommended for backup.
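The "11 nines" figure becomes concrete as an expected-loss calculation (a back-of-envelope model, treating annual loss probability as 10⁻ⁿ for n nines):

```python
def expected_annual_losses(object_count, nines=11):
    """Expected objects lost per year at the advertised durability,
    modeling 'n nines' as an annual loss probability of 10**-n."""
    return object_count * 10 ** -nines

# 10 million objects at 11-nines durability:
print(expected_annual_losses(10_000_000))
# ≈ 0.0001 objects/year, i.e. one object per 10,000 years on
# average, matching AWS's published framing.
```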

Lifecycle policies and cost optimization

Lifecycle policies automatically transition backup objects between tiers based on age:

  • Typical pattern: recent backups in Standard or Hot for fast restore; backups 30-90 days old in IA/Cool; 90+ days old in Glacier/Archive.
  • Retention to archive: compliance backups transition to Deep Archive after 1 year for 7-year retention at minimal cost; see backup vs archive for the strategic distinction.
  • Lifecycle automation: AWS S3 Lifecycle, Azure Lifecycle Management, Google Object Lifecycle.
  • Object size considerations: small objects can incur metadata overhead exceeding storage cost; aggregation may be needed.
  • Cost reduction example: the n2ws reference notes shifting backups from Standard to Glacier can cut storage costs by over 85%.
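The savings from tiering can be checked against the per-GB prices quoted in the table above (illustrative figures):

```python
STANDARD = 0.023    # $/GB-month, S3 Standard
FLEXIBLE = 0.0036   # $/GB-month, S3 Glacier Flexible Retrieval
DEEP     = 0.00099  # $/GB-month, S3 Glacier Deep Archive

def savings_pct(from_price, to_price):
    """Percentage storage-cost reduction from moving between tiers."""
    return round((1 - to_price / from_price) * 100, 1)

print(savings_pct(STANDARD, FLEXIBLE))  # → 84.3
print(savings_pct(STANDARD, DEEP))      # → 95.7
```

Standard-to-Flexible saves about 84% and Standard-to-Deep-Archive about 96%, bracketing the "over 85%" figure the n2ws reference cites; the trade is retrieval delay and fees, not durability.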

Cloud Backup Architectures

Cloud backup is implemented through several distinct architectural patterns serving different operational and management preferences.

Customer-managed cloud backup

The traditional model: customer operates backup software locally and writes to cloud object storage:

  • Backup software runs on-premises: Veeam Backup and Replication, Veritas NetBackup, Commvault, Acronis Cyber Protect.
  • Cloud destination: AWS S3, Azure Blob, Google Cloud Storage, Wasabi, Backblaze B2 as repository targets.
  • Direct cloud writes: backup software writes deduplicated, compressed, encrypted data to cloud buckets.
  • Local cache option: hybrid configurations keep recent backups on-premises with older data tiered to cloud.
  • Customer responsibility: backup operations, scheduling, retention policies, restore procedures, capacity planning.
  • Cloud responsibility: storage durability, availability, encryption at rest, regional redundancy.

Backup-as-a-Service (BaaS)

Fully managed cloud backup where the provider handles all backup infrastructure:

  • Provider-managed infrastructure: no on-premises backup servers; everything runs in provider’s cloud.
  • Agent-based backup: lightweight agents on customer systems push data directly to provider cloud.
  • Major BaaS providers: Druva (SaaS-native), Cohesity DataProtect, Rubrik Security Cloud, Iron Mountain Iron Cloud Data Protection.
  • Per-protected-resource pricing: typical billing by VM, endpoint, user, or per-GB protected.
  • SLA-based delivery: RTO and RPO commitments included in service agreement.
  • Customer responsibility: agent deployment, retention policy selection, restoration testing.

Cloud-native backup services

Native services within each cloud platform for backing up cloud-resident workloads:

  • AWS Backup: centralized backup across EC2, EBS, RDS, DynamoDB, EFS, FSx, S3, Storage Gateway with policy-based management.
  • Azure Backup: Recovery Services vaults for VM, file share, SQL, SAP HANA, on-premises System Center DPM integration.
  • Google Cloud Backup and DR: backup for Compute Engine VMs, Cloud SQL, BigQuery; integration with on-premises via agents.
  • Cross-region replication: all native services support replicating backups to additional regions for DR.
  • Cross-account isolation: backups stored in separate AWS accounts or Azure subscriptions for ransomware-resistant separation.
  • Cost integration: billed through native cloud account; consolidated invoicing.

SaaS application backup

Specialized cloud backup for SaaS data not protected by the SaaS provider:

  • Microsoft 365 backup: Veeam Backup for Microsoft 365, Druva inSync for M365, AvePoint Cloud Backup, Datto SaaS Protection.
  • Google Workspace backup: Veeam Backup for Google Workspace, Spanning, AvePoint, Datto SaaS Protection.
  • Salesforce backup: OwnBackup (now Own Company), Spanning Salesforce, Druva for Salesforce.
  • API-based extraction: backups read SaaS data through provider APIs; no agents installed.
  • Storage in separate cloud: backups stored in independent cloud (often different provider) for true separation.
  • Granular restoration: individual emails, files, calendar events, contacts, Salesforce records can be restored.

DRaaS (Disaster Recovery as a Service)

Cloud backup extended with disaster recovery capabilities including VM spin-up:

  • Continuous replication: low-RPO replication of production VMs to provider cloud.
  • VM spin-up capability: failover VMs in provider cloud during disaster.
  • Network configuration: includes IP, DNS, VPN, and connectivity setup for failover environment.
  • Failback support: reverse replication from cloud to recovered on-premises after primary recovery.
  • Major DRaaS providers: Zerto, Datto SIRIS, VMware Site Recovery, Azure Site Recovery, AWS Elastic Disaster Recovery.
  • Test failover: non-disruptive testing of failover procedures, typically scheduled quarterly.

Bandwidth seeding for initial backup

Initial seeding for large data sets uses physical media transfer when network upload is impractical:

  • AWS Snowball: 50 TB and 80 TB physical devices shipped to customer for offline initial seed.
  • AWS Snowmobile: 100 PB shipping container for extreme-scale data center migration (discontinued by AWS in 2024).
  • Azure Data Box: family of devices from 8 TB Disk to 1 PB Heavy for offline transfer.
  • Google Transfer Appliance: 40 TB and 300 TB physical devices for initial cloud upload.
  • Process: customer fills device, ships to provider, data uploaded to cloud account, ongoing backups via internet.
  • When seeding makes sense: when initial backup would take longer than physical shipping plus upload (typically 100+ TB).
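The "when seeding makes sense" break-even is simple arithmetic. A sketch, assuming 80% sustained link efficiency (real links vary):

```python
def upload_days(tb, mbps, efficiency=0.8):
    """Days to upload `tb` terabytes (decimal TB) over an `mbps` link
    at the given sustained efficiency."""
    bits = tb * 8 * 10**12
    seconds = bits / (mbps * 10**6 * efficiency)
    return seconds / 86400

# 100 TB over a 1 Gbps link at 80% efficiency:
print(f"{upload_days(100, 1000):.1f} days")  # → 11.6 days
# If a shipped appliance turns around in roughly 7-10 days end to
# end, physical seeding starts winning around this data volume.
```

Below gigabit speeds the case is starker: the same 100 TB over 100 Mbps is nearly four months of upload.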

Immutability and Ransomware Protection

Immutable cloud backup is the modern defense against ransomware attacks that target backup repositories. Without immutability, backups can be encrypted or deleted by attackers along with production data.

What immutable cloud backup means

The TechTarget off-site backup reference describes the mechanism: “Some Backup-as-a-Service providers offer immutable backups in their cloud service-level agreements. This means that once a backup is written, it cannot be altered or deleted for a defined retention period. Immutable storage in the cloud can mimic the experience of storing a backup copy offline in a secure location.”6 Properties:

  • Write-once-read-many (WORM): data can be written and read but not modified or deleted.
  • Time-based retention: immutability period expires after configured duration (days, months, years).
  • Cannot be overridden: in compliance mode, even root account or administrator cannot delete.
  • Logical air gap: simulates the protection of physical air-gapped backup tapes.
  • Foundation of 3-2-1-1-0: the additional “1 immutable” copy in modernized rule.
  • Required by ransomware insurance: increasingly mandated for cyber insurance coverage.
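The WORM semantics above can be modeled with a toy in-memory store. This is purely illustrative: real enforcement lives in the provider (S3 Object Lock, Azure immutable blobs), not in client code, and `WormStore` is an invented class.

```python
import time

class WormStore:
    """Toy model of WORM object storage: writes succeed once, and
    deletes/overwrites fail until the retention clock expires."""
    def __init__(self):
        self._objects = {}  # key -> (data, retain_until_epoch)

    def put(self, key, data, retention_seconds):
        if key in self._objects and time.time() < self._objects[key][1]:
            raise PermissionError(f"{key} is locked until retention expires")
        self._objects[key] = (data, time.time() + retention_seconds)

    def delete(self, key):
        if time.time() < self._objects[key][1]:
            raise PermissionError(f"{key} is locked until retention expires")
        del self._objects[key]

store = WormStore()
store.put("backup-2024-06-01", b"...", retention_seconds=3600)
try:
    store.delete("backup-2024-06-01")  # ransomware-style deletion attempt
except PermissionError as e:
    print("blocked:", e)
```

In compliance mode the provider enforces this unconditionally, which is why a compromised administrator account still cannot purge the backups.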

Implementation mechanisms

Major cloud providers implement immutability with different mechanisms and modes:

  • AWS S3 Object Lock – Governance mode: users with specific permissions can override the lock; suitable for short-term protection.
  • AWS S3 Object Lock – Compliance mode: no one (including root) can delete or modify; suitable for compliance requirements.
  • Azure immutable blob storage: time-based retention policies (locked or unlocked) and legal holds for indefinite retention.
  • Google Cloud Bucket Lock: retention policy enforced at bucket level; cannot be reduced once locked.
  • Wasabi compliance immutability: built-in compliance-level immutability without separate object lock configuration.
  • Backblaze B2 Object Lock: S3-compatible Object Lock for B2 cloud storage.

Iron Mountain immutable cloud framing

The Iron Mountain reference describes the protection mechanism: “Our service allows customers to store a clean copy of data offsite, making it inaccessible and invisible to malware targeting the production environment. The system uses immutable storage, meaning that once data is written, it cannot be altered or deleted for a set period, which protects backup integrity from malicious encryption attempts. In the event of an attack, your business can restore operations from the untouched, air-gapped data copy.”7

Why credentials matter in cloud immutability

Even with immutable storage, attacker-compromised cloud credentials can defeat protection if not properly architected:

  • Credential compromise risk: stolen access keys or compromised IAM roles can delete unprotected backups.
  • Why governance mode fails: if attackers gain administrator credentials, governance-mode locks can be overridden.
  • Compliance mode requirement: for ransomware-resistant backups, compliance mode is essential.
  • Separate accounts/tenants: backup destinations should be in separate cloud accounts with isolated authentication.
  • MFA delete protection: require multi-factor authentication for deletion operations in S3 buckets.
  • Backup vault concept: AWS Backup Vault, Azure Backup Vault, Google Backup vault provide additional access control.

Modern 3-2-1-1-0 rule

The Veeam managed offsite backup reference describes the modernized rule: “Free up resources and meet the 3-2-1-1-0 rule by offloading data protection to the experts.”8 The components:

  • 3 copies of data: production plus 2 backup copies.
  • 2 different media types: diversity of storage technology.
  • 1 copy offsite: geographic separation from production.
  • 1 copy immutable: WORM-protected against modification.
  • 0 errors: verified through restoration testing and integrity checks via hash verification.
  • Cloud as natural fit: cloud backup with immutability satisfies the offsite + immutable + verified requirements.
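The "0 errors" component above rests on hash verification of restored data. A minimal sketch, assuming a recorded manifest of source hashes (`verify_restore` and its schema are illustrative, not from any particular backup product):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_restore(source_hashes: dict, restored: dict) -> list:
    """Compare recorded source hashes against freshly hashed restored
    objects; any mismatch or missing object is a verification error."""
    errors = []
    for name, expected in source_hashes.items():
        if name not in restored:
            errors.append((name, "missing"))
        elif sha256(restored[name]) != expected:
            errors.append((name, "corrupt"))
    return errors

manifest = {"db.dump": sha256(b"payload")}
assert verify_restore(manifest, {"db.dump": b"payload"}) == []  # 0 errors
assert verify_restore(manifest, {"db.dump": b"tampered"}) == [("db.dump", "corrupt")]
print("verification demo passed")
```

Running this kind of check on every restoration test, not just after incidents, is what turns "we have backups" into "we have verified recovery points."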

The SaaS Shared Responsibility Gap

One of the most-misunderstood aspects of cloud backup is the shared responsibility model for SaaS applications. Microsoft 365, Google Workspace, Salesforce, and similar SaaS platforms explicitly state that customers are responsible for protecting their own data.

The Barracuda canonical framing

The Barracuda Networks World Backup Day reference describes the gap: “When production data lives in a SaaS platform like Microsoft 365, the idea of ‘offsite’ can be misleading. The data already lives in someone else’s data center, outside the physical risks of the business location. SaaS providers deliver redundancy and availability to keep services running, but those controls have nothing to do with data backup. Microsoft is explicit about this shared responsibility model: customers own their data and identities and are responsible for protecting them. Retention policies and recycle bins offer limited, short-term recovery, not true backup or disaster recovery.”9

What Microsoft and Google provide

SaaS providers offer infrastructure availability and limited recovery, not true backup:

  • Infrastructure SLA: 99.9% availability for service uptime; not a data protection guarantee.
  • Recycle bin (M365): 30-93 days deleted item retention depending on configuration.
  • Recycle bin (Google Workspace): 30 days for most data; items are permanently deleted afterward.
  • Retention policies: compliance-focused retention; not point-in-time backup recovery.
  • Litigation hold: legal hold for in-place preservation; specific to compliance scenarios.
  • Versioning (limited): version history for files; not complete account-level backup.

What Microsoft and Google do NOT provide

Critical data protection scenarios fall outside SaaS provider responsibility:

  • Long-term retention beyond recycle bin: data deleted past 30-93 days is gone permanently.
  • Ransomware recovery: if attacker compromises account and encrypts files, SaaS does not provide rollback.
  • Malicious admin actions: compromised admin credentials enable widespread data destruction.
  • Granular point-in-time restore: recovering account state to specific moment is not native capability.
  • Departed user data: when employee leaves and license is removed, data is permanently deleted.
  • Cross-tenant migration: moving data between tenants requires backup and restore.

Same-cloud backup limitation

The Barracuda reference describes a critical architectural concern: “You could create backups of your data in the same cloud environment, such as an Azure-hosted backup for Azure-hosted production data. This gives you a second copy of the data, but it might not provide meaningful separation. A tenant-level compromise, credential theft or cloud-wide incident can affect both your production copy and the backup.” The implications:

  • Cross-cloud separation: backups should reside in different cloud or different account/tenant.
  • Independent authentication: backup destination should require separate credentials from production.
  • Administrative isolation: backup admins should be separate from production admins.
  • Network isolation: backup traffic should not depend on production network availability.
  • Provider risk: using single cloud creates dependency on that provider’s availability.

SaaS backup solutions

Dedicated SaaS backup solutions address the shared responsibility gap:

  • Microsoft 365 backup: Veeam Backup for M365, Druva inSync for M365, AvePoint, Datto SaaS Protection, Spanning.
  • Google Workspace backup: Veeam Backup for Google Workspace, Spanning, AvePoint, Datto SaaS Protection.
  • Salesforce backup: Own Company (formerly OwnBackup), Spanning Salesforce, Druva for Salesforce.
  • Multi-SaaS platforms: Druva, Cohesity DataProtect, AvePoint cover multiple SaaS applications.
  • Coverage scope: typically Exchange/Gmail mailboxes, OneDrive/Drive files, SharePoint sites, Teams chats, Calendar.
  • Restoration capability: granular item-level (single email, single file) through full account restoration.

Cloud backup has become the default offsite component of modern data protection, replacing tape rotation in many environments and providing the foundation for ransomware-resistant backup architectures through immutable storage. For data recovery purposes, the practical implication is that cloud backup combines storage scalability, geographic separation, and immutability features that are difficult to achieve with on-premises infrastructure alone; however, storage tier choices, retrieval costs, and the shared responsibility model require deliberate design to avoid expensive surprises during actual recovery operations. The most common cloud backup mistakes are choosing the wrong storage tier (Glacier Deep Archive for backups that need rapid restore), assuming the SaaS provider backs up your data (Microsoft 365 explicitly does not), and placing backups in the same account as production (which defeats the isolation that makes the offsite copy ransomware-resistant).

For users facing recovery scenarios involving cloud backup, the practical guidance follows the architecture. If the cloud backup is in immutable storage and restoration is straightforward, retrieval is typically fast and reliable; budget for expedited retrieval costs if the RTO requires it. If the backup is in a deep archive tier, expect 12-48 hour restoration delays plus retrieval fees; this is expected behavior, not a problem. If the backup lives in the same cloud and account as production and that account was compromised, recovery options narrow to whatever immutable copies exist; cross-cloud or cross-account separation matters enormously here. Standard data recovery software applies only when both the source and the cloud backup have failed; this is rare with well-architected cloud backup but happens with poorly isolated configurations. HDD-focused recovery tools address local backup target failures separately from cloud failures. The strongest cloud backup posture uses immutable storage with cross-cloud or cross-account separation, satisfies the modern 3-2-1-1-0 rule, includes regular restoration testing, and matches the storage tier to actual recovery time requirements.

Cloud Backup FAQ

What is cloud backup?

Cloud backup is the process and service of replicating production data to remote cloud infrastructure for protection against loss, corruption, ransomware, and disaster scenarios. The Cohesity cloud backup reference defines it: cloud backup is both a process and a service in which production data is replicated and stored in the cloud to protect it from loss or corruption; organizations choose backup in the cloud to keep their data safe and accessible in case of a hardware or software failure, outage, natural disaster, or cyberattack such as ransomware. Cloud backup naturally satisfies the offsite requirement of the 3-2-1 backup rule because the data resides in cloud provider data centers physically separated from the source environment. Cloud backup encompasses several architectural patterns: customer-managed backup software writing to cloud object storage (Veeam, Acronis, Backblaze, Carbonite, IDrive); Backup-as-a-Service (BaaS) where the provider manages the entire backup infrastructure (Druva, Cohesity DataProtect, Iron Mountain Iron Cloud); SaaS application backup specifically for Microsoft 365 and Google Workspace; and Disaster-Recovery-as-a-Service (DRaaS) extending backup with cloud-based VM spin-up. Cloud backup is increasingly viewed as the default offsite component of modern data protection, replacing tape rotation in many environments while complementing local backup appliances for hybrid architectures.

What are AWS S3 storage tiers and how do they apply to backup?

AWS S3 provides multiple storage classes with different cost-and-retrieval trade-offs designed for different backup access patterns. S3 Standard ($0.023 per GB per month) is for frequently accessed data with millisecond retrieval; S3 Standard-Infrequent Access ($0.0125 per GB per month) is for less-frequent access with millisecond retrieval and 30-day minimum storage; S3 Intelligent-Tiering automatically moves objects between access tiers based on observed access patterns; S3 Glacier Instant Retrieval is for archive data needing millisecond access (such as medical images), accessed once per quarter typically; S3 Glacier Flexible Retrieval ($0.0036 per GB per month, 90-day minimum storage) is the standard tier for backup and disaster recovery, with retrieval options of expedited (1-5 minutes), standard (3-5 hours), and free bulk (5-12 hours); S3 Glacier Deep Archive ($0.00099 per GB per month, 180-day minimum storage) is the lowest-cost option for compliance archives accessed less than once per year, with 12-48 hour retrieval times. The AWS reference notes: S3 Glacier Deep Archive delivers the lowest cost storage in the cloud with data retrieval within twelve hours; data is redundantly stored across multiple physically separated AWS Availability Zones with 99.999999999% (11 9s) durability. Lifecycle policies can automatically transition backup objects between tiers based on age, dramatically reducing costs as backups age into cold tiers.

Is cloud storage the same as cloud backup?

No. Cloud storage and cloud backup are distinct concepts often confused. The Iron Mountain reference describes the distinction: while cloud storage is useful for accessibility, a dedicated cloud backup service is essential for business continuity; it provides the mechanisms needed for a full recovery after a data loss event. Cloud storage (Dropbox, Google Drive, OneDrive) is primarily for active file sharing, collaboration, and accessibility; the cloud copy is typically the working copy synchronized with local devices. If a file is deleted, encrypted by ransomware, or corrupted in cloud storage, the change synchronizes to all connected devices and the previous state may be unrecoverable beyond limited version history (typically 30-90 days). Cloud backup is purpose-built for data protection: backups are typically immutable for retention period; multiple recovery points exist (daily, weekly, monthly retention); ransomware-resistant features (object versioning, immutable storage, isolated authentication) protect against credential compromise; restoration tools support full system recovery, not just file retrieval. The Quest Systems reference describes the additional consideration: cloud storage is often mistaken for offsite backup; while both involve storing data outside of your physical environment, there are critical distinctions; cloud backups are typically online and connected, meaning they can be accessed instantly but are also exposed to the same risks as the production environment. The most-effective cloud backup strategy uses immutable storage to provide protection against the threats that defeat cloud storage.

What is the Microsoft 365 shared responsibility model for backup?

The Microsoft 365 shared responsibility model places data protection responsibility on the customer, not Microsoft, and is a critical consideration for cloud-first organizations. The Barracuda Networks World Backup Day reference describes the gap: when production data lives in a SaaS platform like Microsoft 365, the idea of offsite can be misleading; the data already lives in someone else’s data center, outside the physical risks of the business location; SaaS providers deliver redundancy and availability to keep services running, but those controls have nothing to do with data backup; Microsoft is explicit about this shared responsibility model: customers own their data and identities and are responsible for protecting them; retention policies and recycle bins offer limited, short-term recovery, not true backup or disaster recovery. Microsoft provides infrastructure availability (99.9% SLA), basic recycle bin recovery (typically 30-90 days), some retention policies, and litigation hold features; Microsoft does NOT provide: long-term backup retention against accidental deletion past retention period; protection against ransomware encrypting M365 data through compromised account; recovery from malicious administrator actions; granular point-in-time restoration. Dedicated M365 backup solutions (Veeam Backup for Microsoft 365, Druva inSync for M365, AvePoint, Datto SaaS Protection) provide the missing protection by maintaining independent backup copies in separate cloud or on-premises infrastructure. The same shared responsibility model applies to Google Workspace, Salesforce, and other SaaS platforms.

How does immutable cloud backup protect against ransomware?

Immutable cloud backup uses write-once-read-many (WORM) storage that cannot be modified or deleted until a configured retention period expires, providing protection against ransomware attacks that target backup repositories. The TechTarget off-site backup reference describes the mechanism: some Backup-as-a-Service providers offer immutable backups in their cloud service-level agreements; once a backup is written, it cannot be altered or deleted for a defined retention period; immutable storage in the cloud can mimic the experience of storing a backup copy offline in a secure location. Implementation mechanisms: AWS S3 Object Lock provides governance mode (specific permissions can override the lock) or compliance mode (the lock cannot be removed even by the root account) for the configured retention period; Azure immutable blob storage provides time-based retention policies and legal holds; Google Cloud Storage provides retention policies and bucket lock; Wasabi cloud storage offers compliance-level immutability. The Iron Mountain reference describes the protection: the system uses immutable storage, meaning that once data is written, it cannot be altered or deleted for a set period, which protects backup integrity from malicious encryption attempts; in the event of an attack, your business can restore operations from the untouched, air-gapped data copy. Immutable cloud backup is increasingly required for ransomware insurance coverage and is the foundation of the modernized 3-2-1-1-0 rule (3 copies, 2 media, 1 offsite, 1 immutable, 0 errors after verification).
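As a concrete sketch of the S3 mechanism: with Object Lock enabled on a bucket (which also requires versioning), each upload can carry a lock mode and a retain-until date. The helper below is our own illustrative wrapper, not an AWS-provided function; it builds the `put_object` arguments, and the actual upload (which needs boto3 and credentials) is kept in a separate function so the builder stays dependency-free.

```python
from datetime import datetime, timedelta, timezone

def object_lock_put_args(bucket, key, body, retention_days, mode="COMPLIANCE"):
    """Build put_object arguments that write an immutable (WORM) copy.

    In COMPLIANCE mode the object cannot be overwritten or deleted --
    even by the root account -- until the retain-until date passes.
    GOVERNANCE mode allows principals with special permissions to
    override the lock. The target bucket must have Object Lock enabled."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ObjectLockMode": mode,  # "COMPLIANCE" or "GOVERNANCE"
        "ObjectLockRetainUntilDate": retain_until,
    }

def upload_immutable(args):
    """Perform the actual upload (requires AWS credentials)."""
    import boto3  # lazy import keeps the builder testable without AWS
    return boto3.client("s3").put_object(**args)
```

Until the retain-until date passes, delete and overwrite requests against that object version fail, which is what gives the backup its offline-like resistance to a compromised credential.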

What are the major cloud backup providers?

Cloud backup providers fall into several categories serving different markets.

  • Consumer and small-business cloud backup: Backblaze offers unlimited backup storage with versioning for personal and small-business use; Carbonite provides backup for Windows and Mac with pricing based on number of systems; CrashPlan focuses on small businesses with unlimited storage and ransomware features; IDrive supports cross-platform backup (Windows, Mac, Linux, iOS, Android) with single-account multi-device coverage; Livedrive provides UK-based backup with EU compliance support.
  • Enterprise on-premises with cloud backup integration: Veeam Backup and Replication writes to AWS S3, Azure Blob, Google Cloud Storage, and other object stores; Veritas NetBackup integrates with all major clouds; Commvault Complete Backup provides multi-cloud support; Acronis Cyber Protect combines backup with cybersecurity features.
  • Backup-as-a-Service (fully managed): Druva offers SaaS-native backup for endpoints, servers, and SaaS applications; Cohesity DataProtect provides web-scale BaaS; Rubrik Security Cloud combines backup with cyber recovery; Iron Mountain Iron Cloud Data Protection offers managed backup with immutable storage.
  • SaaS-specific backup: Veeam Backup for Microsoft 365, Druva inSync, AvePoint, Spanning, and Datto SaaS Protection focus specifically on M365 and Google Workspace data.
  • Cloud-native backup services: AWS Backup, Azure Backup, and Google Cloud Backup and DR provide native backup orchestration within each cloud's ecosystem.
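For the cloud-native category, backup policy is typically declared as a plan document rather than configured per job. The sketch below builds a plan definition in the shape boto3's AWS Backup client accepts for `create_backup_plan`; the rule name, vault name, and schedule are placeholder assumptions, and note that AWS requires `DeleteAfterDays` to exceed `MoveToColdStorageAfterDays` by at least 90 days.

```python
def backup_plan_definition(name, vault, schedule, cold_after_days, delete_after_days):
    """Build an AWS Backup plan document with one scheduled rule that
    tiers recovery points to cold storage and later expires them.
    AWS enforces delete_after_days >= cold_after_days + 90."""
    if delete_after_days < cold_after_days + 90:
        raise ValueError("DeleteAfterDays must be at least 90 days after cold tiering")
    return {
        "BackupPlanName": name,
        "Rules": [{
            "RuleName": "nightly",                  # placeholder rule name
            "TargetBackupVaultName": vault,
            "ScheduleExpression": schedule,         # e.g. "cron(0 3 * * ? *)"
            "Lifecycle": {
                "MoveToColdStorageAfterDays": cold_after_days,
                "DeleteAfterDays": delete_after_days,
            },
        }],
    }

def create_plan(plan):
    """Submit the plan (requires AWS credentials)."""
    import boto3  # lazy import keeps the builder testable without AWS
    return boto3.client("backup").create_backup_plan(BackupPlan=plan)
```

Declaring retention and tiering in the plan, rather than per backup job, is what lets the service apply the same lifecycle to every resource assigned to it.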

Related glossary entries

  • 3-2-1 Backup Rule: cloud backup is the most common modern way to satisfy the offsite requirement.
  • Backup vs Archive: cloud archive tiers (Glacier Deep Archive) handle the archive role; cloud Standard handles backup.
  • Incremental Backup: incremental is the bandwidth-efficient backup type for cloud destinations.
  • Differential Backup: less common for cloud due to bandwidth cost; incremental wins for cloud destinations.
  • Storage Snapshot: cloud platforms include native snapshot services for cloud-resident workloads.
  • Hash Verification: confirms cloud backup integrity at upload and restoration.
  • Hardware Encryption: cloud backup encrypts data at rest; customer-managed keys add an additional layer.

About the Authors

đŸ‘„ Researched & Reviewed By
Rachel Dawson
Technical Approver · Data Recovery Engineer

Rachel brings over twelve years of data recovery engineering experience, including substantial work with cloud backup recovery scenarios. The most common pattern she sees is organizations that never backed up their M365 data, on the assumption that "Microsoft already backs it up," and discovered after ransomware or an accidental admin deletion that the recycle bin window had expired before they noticed the loss. Her universal advice on cloud backup: verify your immutability configuration, test restoration regularly, and never assume the SaaS provider is backing up your data.

12+ years data recovery engineering · Cloud backup recovery · M365 SaaS recovery
✅
Editorial Independence & Affiliate Disclosure

Data Recovery Fix earns revenue through affiliate links on some product recommendations. This does not influence our reference content. Glossary entries are written and reviewed independently based on documented research, vendor documentation, independent testing, and recovery-engineer review. If anything on this page looks inaccurate, outdated, or worth revisiting, please reach out at contact@datarecoveryfix.com and we’ll review it promptly.
