Disk Mirroring
Real-time replication of logical disk volumes onto separate physical drives to ensure continuous availability when one drive fails. Most commonly implemented as RAID 1 in hardware or software, but extends to filesystem-native mirroring (ZFS mirror vdevs, Btrfs RAID 1), network mirroring (DRBD between Linux hosts), and SAN-level synchronous replication (NetApp SyncMirror, Pure ActiveCluster, Dell PowerMax SRDF). Mirroring is typically synchronous: writes complete only after both copies have been written. The fundamental trade-off is a 50% capacity penalty in exchange for continued operation through single-drive failure. Mirroring is high availability, NOT a backup: ransomware encryption, accidental deletion, and file corruption all propagate immediately to the mirror.
What Disk Mirroring Is
The Wikipedia disk mirroring reference provides the canonical definition: “In data storage, disk mirroring is the replication of logical disk volumes onto separate physical hard disks in real time to ensure continuous availability. It is most commonly used in RAID 1. A mirrored volume is a complete logical representation of separate volume copies.”1
The fundamental concept
Disk mirroring is defined by what it does and what it provides:
- What it does: writes the same data simultaneously to two or more physical drives (or two or more remote nodes).
- Real-time: mirroring happens at the moment of each write, not on a schedule.
- Continuous availability: if one drive or node fails, the system continues operating from the surviving copy.
- Identical copies: all mirror members contain the same data at all times (in synchronous mode).
- Transparent to applications: applications see a single logical volume; the mirroring is handled below the file system level.
- Read performance benefit: reads can be served from any mirror copy, allowing parallelization across drives.
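To make the concept concrete, the sketch below builds a throwaway RAID 1 mirror from two loopback files on a Linux host. This is a minimal experiment under stated assumptions, not a production setup: the file names, loop devices, and md device number are illustrative, and the commands assume mdadm is installed and run as root.

```bash
# Create two 1 GiB backing files and expose them as block devices.
truncate -s 1G disk0.img disk1.img
losetup /dev/loop0 disk0.img
losetup /dev/loop1 disk1.img

# Assemble them into a 2-way mirror; every write now lands on both devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1

# Applications see one logical volume; the mirroring is invisible to them.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt

# Watch the mirror state (both members should show as active: [UU]).
cat /proc/mdstat
```

Failing one member (mdadm --manage /dev/md0 --fail /dev/loop1) leaves the mounted filesystem fully usable from the surviving copy, which is the whole point of the technique.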
The broad vs narrow usage
The TechTarget disk mirroring reference describes two common usages: “Disk mirroring, also known as RAID 1, is the replication of data across two or more disks. The term ‘disk mirroring’ is sometimes used in a broader sense to describe any type of disk replication, but in most cases, it is meant within the context of RAID 1.”2 The broader interpretation includes:
- Hardware RAID 1: dedicated controllers from LSI, Broadcom, Adaptec, Areca; transparent to the OS.
- Software RAID 1: Linux mdadm, Windows dynamic disk mirrors, macOS Disk Utility mirroring.
- Filesystem-native mirroring: ZFS mirror vdevs, Btrfs RAID 1 profiles with self-healing capability.
- Network mirroring: DRBD on Linux for synchronous replication between hosts.
- SAN replication: NetApp SyncMirror, Pure ActiveCluster, Dell PowerMax SRDF, HPE Peer Persistence.
- Cloud equivalents: AWS EBS Multi-AZ, Azure ZRS, Google Regional Persistent Disk.
The 50% capacity trade-off
The DiskInternals RAID mirroring reference describes the storage cost: “Mirroring takes much of your RAID’s total storage capacity, 50% of the RAID capacity actually. If you make a RAID 1 with 4 1TB SSDs, you may expect the RAID to offer you a total of 4 TB storage space, but no, it will offer you just 1 TB as the total storage.”3 The capacity math:
- 2-way mirror with 2 drives: usable capacity equals one drive’s capacity (50% penalty).
- 3-way mirror with 3 drives: usable capacity equals one drive’s capacity (66% penalty); tolerates 2 simultaneous failures.
- 4-way mirror with 4 drives: usable capacity equals one drive (75% penalty); typical only in extreme reliability scenarios.
- ZFS n-way mirrors: ZFS supports arbitrary mirror count; capacity equals smallest drive regardless of count.
- Hybrid arrangements: RAID 10 combines mirroring with striping; 4 drives mirrored in pairs then striped = 50% penalty with redundancy and performance.
Performance characteristics
Mirroring affects read and write performance differently:
- Read performance: can be parallelized across mirror copies; 2-way mirror potentially doubles read throughput.
- Write performance: must complete on all copies; total write time approximately equals slowest drive.
- Latency under failure: when a mirror member fails, write latency may spike during recovery operations.
- Resilvering impact: rebuilding a replaced drive consumes I/O bandwidth, affecting production performance.
- Cache effects: battery-backed write cache on hardware RAID controllers can mask the synchronous write penalty.
- SSD vs HDD: SSD mirrors have minimal write penalty; HDD mirrors have noticeable write impact.
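One way to observe these characteristics is to benchmark the mirror directly. The sketch below uses fio, a common Linux I/O benchmark, against a hypothetical /dev/md0 mirror; the device path, block sizes, and runtimes are placeholders. Under these assumptions, random-read throughput should roughly scale with the number of mirror members, while write throughput tracks the slowest drive.

```bash
# Random reads: the mirror layer can serve these from any member.
fio --name=mirror-read --filename=/dev/md0 --readonly \
    --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=30 --time_based --group_reporting

# Random writes: every write must land on all members.
# WARNING: destructive to any data on /dev/md0.
fio --name=mirror-write --filename=/dev/md0 \
    --direct=1 --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=30 --time_based --group_reporting
```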
Common use cases
Disk mirroring addresses specific availability requirements:
- System drive protection: mirror the OS drive so the system continues running through a drive failure.
- Database server availability: RAID 1 or RAID 10 for database files where downtime is expensive.
- Active-active cluster nodes: DRBD between cluster nodes for shared-nothing high availability.
- Geographic disaster recovery: SAN-level asynchronous mirroring to remote data center.
- Boot drive in workstations: motherboard RAID 1 protects the system and personal data drives in critical workstations.
- Storage array internal redundancy: SAN arrays use mirroring within and across drive shelves.
Synchronous vs Asynchronous Mirroring
Mirroring can be performed in different modes depending on the latency tolerance and consistency requirements. The choice has substantial implications for performance and recovery point objectives.
Synchronous mirroring
The DRBD documentation describes synchronous mirroring: “With synchronous mirroring, applications are notified of write completions after the writes have been carried out on all hosts.”4 Operational characteristics:
- Write acknowledgment: application receives success only after both copies are written.
- Consistency guarantee: both mirror members are always identical at every commit point.
- Recovery point objective: zero; no committed write is ever lost.
- Latency cost: write latency is constrained by the slowest replication path.
- Distance limitation: typically limited to single data center or metro distance (under 100 km) due to latency constraints.
- Standard for local mirrors: RAID 1, ZFS mirror, DRBD over LAN all use synchronous mode by default.
Asynchronous mirroring
The DRBD documentation contrasts asynchronous mode: “With asynchronous mirroring, applications are notified of write completions when the writes have completed locally, which usually is before they have propagated to the other hosts.” Asynchronous characteristics:
- Write acknowledgment: application receives success after local write only.
- Replication lag: remote copy is typically seconds to minutes behind primary.
- Recovery point objective: non-zero; recent writes may be lost if source fails before replication completes.
- Latency benefit: write latency unaffected by remote replication path.
- Distance flexibility: works across continents; suitable for geographic disaster recovery.
- Common in DR: NetApp SnapMirror, Dell SRDF/A, Pure async replication for cross-region.
Mode comparison
| Property | Synchronous | Asynchronous |
|---|---|---|
| Write acknowledgment | After both copies written | After local write |
| Consistency | Always identical | Replication lag |
| RPO | Zero | Seconds to minutes |
| Write latency | Constrained by network | Local only |
| Distance limit | Metro (under 100 km) | Cross-continental possible |
| Failure scenarios | Mirror lag during partial failure | Lost recent writes possible |
| Common use | Local HA, RAID 1, ZFS mirror | Geographic DR, cross-region |
| Performance impact | Higher (waits on remote) | Lower (immediate ack) |
Semi-synchronous and point-in-time
The Wikipedia mirroring reference identifies additional modes: “replication can be performed synchronously, asynchronously, semi-synchronously, or point-in-time.” These intermediate modes:
- Semi-synchronous: primary waits for acknowledgment that write reached secondary’s memory but not necessarily disk; balances consistency and latency.
- Point-in-time replication: periodic snapshots replicated rather than continuous streaming; less stringent consistency, simpler recovery.
- Cascading replication: primary syncs to secondary, secondary asyncs to tertiary; combines local synchronous with geographic asynchronous.
- Multi-target replication: primary syncs to multiple secondaries with different SLAs.
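Point-in-time replication is easiest to see in ZFS terms. The sketch below replicates periodic snapshots to a second host; the pool, dataset, snapshot, and host names are hypothetical.

```bash
# Take a consistent point-in-time snapshot on the primary.
zfs snapshot tank/data@monday

# Ship the full snapshot to a remote pool (initial seed).
zfs send tank/data@monday | ssh backuphost zfs recv pool/data

# Later rounds send only the delta between snapshots.
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | ssh backuphost zfs recv pool/data
```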
Mirroring Implementations
Disk mirroring is supported across hardware controllers, operating systems, file systems, network storage protocols, and cloud platforms. The following are the most commonly encountered implementations.
Hardware RAID 1
Dedicated RAID controllers manage mirroring at the storage adapter level:
- Vendors: LSI/Broadcom, Adaptec, Areca, HighPoint, Promise.
- Transparent to OS: operating system sees a single logical drive; mirroring is invisible.
- Battery-backed cache: performance optimization; survives power loss to commit cached writes.
- Configuration tools: vendor-specific BIOS utilities, web-based management, OS-level CLI tools.
- Common in servers: standard for enterprise servers requiring local redundancy.
- Recovery complexity: proprietary metadata format means recovery may require same-vendor controller.
Linux mdadm
The standard Linux software RAID tool:
- Open source: part of mainline kernel; no vendor lock-in.
- RAID levels supported: 0, 1, 4, 5, 6, 10, plus combinations.
- Creation syntax:
```bash
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
```
- Metadata: stored on disk; arrays are portable across Linux systems.
- Recovery: supports forced assembly, repair, rebuild commands.
- Performance: uses CPU for RAID operations; modern CPUs make this negligible vs hardware RAID.
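As an illustration of the recovery workflow, the hedged sketch below replaces a failed member of the /dev/md0 mirror created above; the device names are examples and will differ per system.

```bash
# Inspect array health; a failed member shows as (F) in /proc/mdstat.
cat /proc/mdstat
mdadm --detail /dev/md0

# Mark the bad drive failed (if the kernel has not already) and remove it.
mdadm --manage /dev/md0 --fail /dev/sdb
mdadm --manage /dev/md0 --remove /dev/sdb

# Add the replacement; the kernel rebuilds it from the surviving copy.
mdadm --manage /dev/md0 --add /dev/sdc
watch cat /proc/mdstat   # follow rebuild progress
```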
Windows dynamic disks and Storage Spaces
Microsoft provides multiple mirroring options:
- Dynamic disks: legacy software RAID using NTFS; supports mirrored volumes for boot and data.
- Storage Spaces: modern replacement; supports two-way mirror (RAID 1 equivalent), three-way mirror, and parity layouts.
- ReFS integration: Storage Spaces with ReFS provides integrity streams and self-healing on mirrored data.
- Storage Spaces Direct: hyperconverged solution mirroring across multiple servers in a cluster.
- PowerShell management: New-StoragePool, New-VirtualDisk cmdlets for automation.
- Boot mirroring: Windows Server supports mirroring system drives via dynamic disks.
ZFS mirror vdevs
ZFS provides advanced mirror capabilities with self-healing:
- Mirror vdev: 2 or more drives mirrored as basic redundancy unit.
- Pool composition: multiple mirror vdevs striped together for capacity and performance.
- Self-healing: ZFS checksums every block; detects corruption on read; automatically repairs from healthy mirror copy.
- Resilvering: ZFS only rebuilds blocks that are actually used, dramatically faster than traditional RAID rebuild.
- N-way mirrors: arbitrary number of mirror copies; 3-way or 4-way mirrors are occasionally used for critical data.
- Removable mirror: can detach and reattach mirror copies for backup-style operations.
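A hedged sketch of these capabilities using the standard zpool tools; pool and device names are illustrative.

```bash
# Create a pool from one 2-way mirror vdev.
zpool create tank mirror /dev/sdb /dev/sdc

# Grow it to a 3-way mirror by attaching a third copy to an existing member.
zpool attach tank /dev/sdb /dev/sdd

# Scrub: read every allocated block, verify checksums,
# and repair any corruption from a healthy mirror copy.
zpool scrub tank
zpool status tank   # shows resilver/scrub progress and repaired errors

# Detach a copy (e.g., before removing a drive).
zpool detach tank /dev/sdd
```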
DRBD network mirroring
The DRBD reference describes the architecture: “DRBD is a software-based, shared-nothing, replicated storage solution mirroring the content of block devices (hard disks, partitions, logical volumes, and so on) between hosts.”5 The ipserverone reference summarizes: “DRBD mirrors a complete block device over a network, effectively creating a network-based RAID-1.” Properties:
- Block-level replication: mirrors raw block devices, not filesystems; works under any filesystem.
- Network-based: typically over dedicated TCP connection between cluster nodes.
- Synchronous and asynchronous: configurable per resource; protocols A (async), B (semi-sync), C (sync); see the config sketch after this list.
- Active-passive default: only primary node accepts writes; secondary mirrors passively.
- Active-active option: both nodes can accept writes via cluster filesystem (OCFS2, GFS2).
- Quorum support: DRBD 9 added quorum protocol to prevent split-brain.
- Common deployment: 2-node high-availability clusters with Pacemaker for automated failover.
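A minimal resource definition gives a feel for the configuration. The sketch below is DRBD 8.4-style syntax with hypothetical host names, devices, and addresses; option placement varies between DRBD versions, so treat this as an outline and consult the version-specific LINBIT user guide.

```
# /etc/drbd.d/r0.res -- illustrative only
resource r0 {
  net {
    protocol C;          # fully synchronous; A = async, B = semi-synchronous
  }
  device    /dev/drbd0;  # the block device applications actually use
  disk      /dev/sdb1;   # local backing device on each host
  meta-disk internal;

  on alpha {
    address 10.0.0.1:7789;
  }
  on bravo {
    address 10.0.0.2:7789;
  }
}
```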
Enterprise SAN replication
Major SAN vendors provide synchronous and asynchronous mirroring at array level:
- NetApp SyncMirror: synchronous local mirroring within FlexPod or MetroCluster configurations.
- NetApp SnapMirror: asynchronous geographic replication using snapshot differentials.
- Pure ActiveCluster: synchronous metro replication providing zero-RPO high availability.
- Pure ActiveDR: asynchronous geographic disaster recovery.
- Dell PowerMax SRDF: Symmetrix Remote Data Facility; supports synchronous, asynchronous, and metro variants.
- HPE 3PAR/Primera Peer Persistence: synchronous replication with transparent failover.
- IBM SAN Volume Controller: Metro Mirror (sync) and Global Mirror (async).
Cloud platform mirroring
Major cloud providers offer mirroring through their managed disk and storage services:
- AWS EBS: automatic synchronous replication across multiple servers within an availability zone; cross-AZ resilience requires higher-level services such as RDS Multi-AZ.
- Azure Zone-Redundant Storage (ZRS): synchronous replication across three availability zones.
- Azure Geo-Redundant Storage (GRS): asynchronous replication to secondary region.
- Google Regional Persistent Disk: synchronous replication across two zones in a region.
- Cross-region replication: all major clouds offer asynchronous replication to remote regions for DR.
- S3-style replication: object storage replication is a form of async mirroring.
The Split-Brain Problem
Split-brain is the most serious failure mode in distributed mirroring systems and a defining concern for DRBD, SAN replication, and cluster filesystems.
What split-brain means
The DRBD split-brain reference describes the condition: “Split-brain means that the contents of the backing devices of your DRBD resource on both sides of your cluster started to diverge. At some point in time, the DRBD resource on both nodes went into the Primary role while the cluster nodes themselves were disconnected from each other. Different writes happened to both sides of your cluster afterwards.”6 The sequence:
- Mirror peers operating normally; one is primary, one is secondary.
- Network partition occurs; peers can no longer communicate.
- Without proper fencing, both peers may assume the other failed and become primary.
- Both primaries accept independent writes during the partition.
- Network restored; peers attempt to reconnect.
- Mirror system detects divergence; cannot automatically resolve.
- Manual intervention required to choose which writes survive.
Detection and symptoms
The xahteiwi DRBD reference describes detection: “The split-brain is detected once the peers reconnect and do their DRBD protocol handshake; the symptoms of a split-brain are that the peers will not reconnect on DRBD startup but stay in connection state StandAlone or WFConnection.”7 Common symptoms:
- Mirror peers won’t reconnect after network is restored.
- Connection state shows StandAlone or WFConnection.
- Application reports inconsistencies between primary and secondary.
- System logs show split-brain detection messages.
- Both nodes may report Primary status simultaneously.
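On DRBD, these symptoms can be checked from either node. A hedged sketch assuming a resource named r0:

```bash
# Connection state: StandAlone or WFConnection after a split-brain.
drbdadm cstate r0

# Role: both nodes reporting Primary is a classic divergence symptom.
drbdadm role r0

# The kernel log usually records the detection explicitly.
dmesg | grep -i 'split-brain'
```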
Manual recovery procedure
The DRBD recovery reference describes the manual resolution: “After split brain has been detected, one node will always have the resource in a StandAlone connection state. You must manually intervene by selecting one node whose modifications will be discarded (this node is referred to as the split brain victim).”8 The procedure:
- Identify victim and survivor: choose which node’s modifications will be preserved.
- Backup victim data first: the modifications on victim will be lost; back them up before discarding.
- Switch victim to secondary: `drbdadm secondary resource`
- Disconnect victim: `drbdadm disconnect resource`
- Reconnect with discard flag: `drbdadm -- --discard-my-data connect resource`
- Resync from survivor: resynchronization happens automatically once reconnected.
- Verify state: wait for resync completion; confirm both peers consistent.
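Putting the steps together, a hedged sketch for a resource named r0, to be run only after the victim's divergent data has been backed up:

```bash
# --- on the split-brain victim (its changes will be discarded) ---
drbdadm secondary r0
drbdadm disconnect r0
drbdadm -- --discard-my-data connect r0

# --- on the survivor (only if it also dropped to StandAlone) ---
drbdadm connect r0

# --- verify: resync runs automatically; wait for Connected/UpToDate ---
drbdadm cstate r0
```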
Prevention via fencing
Modern systems prevent split-brain through several mechanisms:
- STONITH (Shoot The Other Node In The Head): a surviving node forcibly powers off or resets its peer before claiming exclusive access.
- Hardware fencing: IPMI or PDU-based power control to enforce single-primary policy.
- Quorum protocols: require majority of cluster members to agree before allowing writes.
- Tiebreaker witness: third node or service that votes when primary cluster members are split.
- Network redundancy: multiple network paths reduce probability of complete partition.
DRBD Quorum (DRBD 9)
The LINBIT DRBD Quorum reference describes the modern approach: DRBD 9 added a Quorum feature that enforces a majority requirement for primary status. Properties:
- Majority requirement: nodes only become primary if they can reach a majority of cluster members.
- Three-node minimum: typical deployment with 3 nodes provides quorum even when one fails.
- Automatic outside-quorum action: non-quorum nodes can be configured to suspend I/O, freeze, or shut down.
- Eliminates most split-brain scenarios: nodes outside quorum cannot accept writes that would diverge.
- Alternative to fencing: simpler than STONITH for many deployments.
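In configuration terms, quorum is enabled per resource. A hedged sketch of the DRBD 9 options section (option names per the LINBIT user guide; the surrounding devices and hosts are omitted):

```
resource r0 {
  options {
    quorum majority;        # node must see a majority of peers to stay writable
    on-no-quorum io-error;  # alternatively: suspend-io
  }
  # ... device, disk, and on <host> sections as usual ...
}
```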
Mirroring vs Backup: When Mirroring Isn’t Enough
The most important property of disk mirroring for data protection planning is what it does not protect against. Mirroring duplicates writes in real time, which means it duplicates problems in real time too.
What mirroring protects against
Mirroring provides specific failure mode protection:
- Single drive hardware failure: drive electronics, mechanical, or controller failure on one mirror member.
- Disk surface errors: unreadable sectors recovered from healthy mirror copy.
- Single-point hardware failure: power supply failure on one node (with proper redundant power).
- Filesystem-level corruption (ZFS/Btrfs only): self-healing mirrors detect and correct corrupt blocks.
- Network failure (DRBD only): cluster failover when one node loses network.
- Site failure (geographic mirroring): data center loss with offsite replication.
What mirroring does NOT protect against
Several critical failure modes propagate immediately to the mirror:
- Ransomware encryption: attacker encrypts files; encrypted files are mirrored to all copies in real time.
- Accidental deletion: rm or DELETE command propagates to mirror immediately; deleted file is gone from both copies.
- Application bugs corrupting data: corrupt writes are mirrored; corruption exists on all copies.
- Malicious modification: insider threats or compromised accounts modify data on all mirror copies.
- Failed updates: bad OS patch or application upgrade affects all mirror members.
- Logical filesystem corruption: a corrupted filesystem is mirrored as a corrupted filesystem.
- Human error: dropping the wrong table or formatting the wrong volume affects the mirror.
The combined strategy
Effective data protection layers mirroring with backups:
- Mirroring for hardware fault tolerance: RAID 1, RAID 10, or equivalent for continued operation through drive failure.
- Snapshots for rapid local rollback: hourly or daily snapshots for accidental deletion and corruption.
- Backups for disaster recovery: daily incremental or differential backups to separate storage.
- Off-site backup copies: at least one backup off-site per 3-2-1 rule for site disaster.
- Immutable backup copies: at least one backup that cannot be modified, for ransomware recovery.
- Periodic verification: hash verification of all mirror and backup copies.
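As a toy illustration of the layering, the sketch below combines a local snapshot with a copy to separate storage. The tools, schedules, dataset names, and hosts are all placeholders for whatever the environment actually uses.

```bash
#!/bin/sh
# Layer 1: the mirror itself (availability) is already running underneath.

# Layer 2: local snapshot for rapid rollback of deletions and corruption.
zfs snapshot tank/data@$(date +%F-%H%M)

# Layer 3: backup copy on *different* storage (counts toward 3-2-1).
rsync -a --delete /tank/data/ backuphost:/backups/data/
```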
Mirror break for backup integration
One traditional pattern uses mirroring as a foundation for backup operations:
- Three-way mirror baseline: 3 copies of data normally maintained.
- Break one mirror for backup: detach one copy temporarily.
- Backup detached copy: read at full speed without affecting production.
- Reattach and resync: mirror member rejoins; only changed blocks need resync.
- Modern alternative: snapshot-based backup typically replaces this pattern.
- Use case: still relevant for very large data sets where snapshot overhead is impractical.
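ZFS retains a first-class version of this pattern via zpool split, which detaches one side of each mirror into a new, importable pool. A hedged sketch with illustrative pool and device names:

```bash
# Detach one member of each mirror vdev into a standalone pool.
zpool split tank tank_bkp

# Import the detached copy; back it up at full speed, production unaffected.
zpool import tank_bkp
# ... run the backup job against /tank_bkp ...
zpool export tank_bkp

# Rejoin the device to the live mirror; resilver copies only allocated blocks.
zpool attach tank /dev/sdb /dev/sdc
```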
Disk mirroring is the high-availability mechanism that keeps systems running through drive failures and (in network mirroring) through node failures, but mirroring duplicates writes in real time, including the destructive ones. For data recovery purposes, the practical implication is that mirroring protects against the failure modes recovery practitioners see least often (drive hardware failure with intact filesystem) while providing zero protection against the failure modes recovery practitioners see most often (deletion, corruption, ransomware, human error). The Veeam canonical guidance applies as much to mirrors as to snapshots: a mirror is not a backup.
For users facing recovery scenarios involving mirrored systems, the practical guidance follows the failure mode. If a single drive failed, replace it and let the mirror resilver; data continues to be available throughout. If both drives in a 2-way mirror failed simultaneously (rare), data recovery requires backup copies on separate storage. If files were accidentally deleted or modified, the mirror has the same deleted or modified state as the source; recovery requires snapshots or backups predating the change. If ransomware encrypted the source, the mirror contains encrypted copies; recovery requires offline or immutable backup copies. Standard data recovery software applies when both mirror copies have failed and no backups exist; HDD-focused recovery tools address physical drive failures within mirrors. Cleanroom recovery services handle catastrophic physical damage to mirror members. The strongest data protection pairs mirroring (availability) with backup copies on different storage (preservation).
Disk Mirroring FAQ
What is disk mirroring?
Disk mirroring is the real-time replication of logical disk volumes onto separate physical drives to ensure continuous availability. The Wikipedia disk mirroring reference describes it as the replication of logical disk volumes onto separate physical hard disks in real time to ensure continuous availability; it is most commonly used in RAID 1, where a mirrored volume is a complete logical representation of separate volume copies. When data is written to one drive, the storage system simultaneously writes the same data to the mirror drive (or drives); when a drive fails, the system continues operating from the surviving mirror with no interruption. The TechTarget disk mirroring reference describes the broader usage: disk mirroring is sometimes used in a broader sense to describe any type of disk replication, but in most cases, it is meant within the context of RAID 1. The technique extends beyond RAID 1 to include software mirroring (Windows dynamic disks, Linux mdadm, ZFS mirror vdevs, Btrfs RAID 1 profiles), network mirroring (DRBD between Linux hosts), and SAN-level synchronous replication (NetApp SyncMirror, Pure ActiveCluster, Dell PowerMax SRDF). Mirroring is typically synchronous: writes complete only after both copies have been written. The fundamental trade-off is a 50% capacity penalty (two drives store one drive of unique data) in exchange for continued operation through single-drive failure.
What is the difference between synchronous and asynchronous mirroring?
Synchronous and asynchronous mirroring differ in when the storage system acknowledges writes as complete. With synchronous mirroring, the system waits for both the local write and the mirror write to finish before reporting success to the application; both copies are always identical, but write latency is constrained by the slowest replication path. With asynchronous mirroring, the system reports success after the local write completes; the mirror write happens shortly after; recent writes may not have replicated when the source fails, creating a recovery point objective gap. The DRBD documentation describes the distinction: with synchronous mirroring, applications are notified of write completions after the writes have been carried out on all hosts; with asynchronous mirroring, applications are notified of write completions when the writes have completed locally, which usually is before they have propagated to the other hosts. Synchronous mirroring is the standard for local mirroring (RAID 1, ZFS mirror, DRBD over LAN) where latency between mirrors is sub-millisecond. Asynchronous mirroring is used for long-distance replication (geographic disaster recovery) where synchronous latency would be unacceptable for application performance. The Wikipedia mirroring reference notes: replication can be performed synchronously, asynchronously, semi-synchronously, or point-in-time; mirroring is typically only synchronous.
What is split-brain and how is it recovered?
Split-brain occurs when both mirror peers continue operating while the network between them fails, causing both copies to accept independent writes and diverge. The DRBD split-brain reference describes the condition: split-brain means that the contents of the backing devices of your DRBD resource on both sides of your cluster started to diverge; at some point in time, the DRBD resource on both nodes went into the Primary role while the cluster nodes themselves were disconnected from each other; different writes happened to both sides of your cluster afterwards. After the network reconnects, the mirror system cannot automatically determine which set of writes is correct because both peers were operating legitimately. Recovery requires manual intervention: choose a split-brain victim whose modifications will be discarded; choose a split-brain survivor whose data is preserved; resynchronize the victim from the survivor. DRBD provides commands for this manual resolution: drbdadm secondary resource (on victim), drbdadm disconnect resource, drbdadm -- --discard-my-data connect resource. Modern systems prevent split-brain through fencing (physically powering off a partitioned node), STONITH (Shoot The Other Node In The Head), or quorum protocols (require majority of nodes to operate). DRBD 9 added a Quorum feature that enforces a majority requirement before allowing writes, eliminating most split-brain scenarios.
What platforms and technologies implement disk mirroring?
Disk mirroring is supported across a wide range of platforms and technologies. Hardware RAID 1: dedicated RAID controllers from LSI/Broadcom, Adaptec, Areca, and integrated RAID on motherboard chipsets manage mirroring at the storage adapter level, transparent to the operating system. Software RAID 1: Linux mdadm provides flexible software RAID with detailed control; Windows offers dynamic disk mirrors and Storage Spaces with two-way mirror or three-way mirror; macOS supports software mirroring via Disk Utility for HFS+ and APFS volumes. Filesystem-native mirroring: ZFS mirror vdevs provide self-healing mirrors with checksum-based corruption detection; Btrfs RAID 1 profile offers similar capabilities for Linux; both repair corrupted blocks from the healthy mirror automatically. Network mirroring: DRBD (Distributed Replicated Block Device) creates network-based RAID 1 between two or more Linux hosts, supporting both synchronous and asynchronous replication. Enterprise SAN replication: NetApp SyncMirror provides synchronous local mirroring within or between arrays; NetApp SnapMirror provides asynchronous geographic replication; Pure Storage ActiveCluster provides metro synchronous replication; Dell PowerMax SRDF provides synchronous and asynchronous remote replication; HPE 3PAR Peer Persistence provides similar capabilities. Cloud equivalents: AWS EBS replicates volumes synchronously across multiple servers within an availability zone; Azure Zone-Redundant Storage replicates synchronously across availability zones; Google Cloud Regional Persistent Disk provides synchronous mirroring within a region.
Is a disk mirror a backup?
No. Disk mirroring is a high-availability mechanism, not a backup, and conflating the two is one of the most common causes of data loss in environments that thought they were protected. Mirroring duplicates writes in real time to the mirror, including: ransomware encryption (the encrypted file is immediately replicated to the mirror, destroying both copies); accidental deletion (rm or DELETE command propagates to the mirror immediately); file corruption from buggy applications (corrupt data is mirrored along with valid data); malicious modification by attackers or insiders; failed updates that corrupt files. The mirror provides protection only against drive hardware failure, not against logical or human errors. The Nfina RAID mirroring reference describes the protection scope: if one drive fails, the other drives in the array will continue to function, providing uninterrupted access to the data; this is high availability, not data preservation across time. Effective data protection requires combining mirroring (for availability) with separate backup copies (for protection against ransomware, deletion, corruption, disasters). The 3-2-1 backup rule applies regardless of whether mirroring is in place: maintain 3 copies of data, on 2 different media types, with 1 copy offsite. The mirror counts as a single storage location for 3-2-1 purposes; backups must reside on different storage to provide independent protection.
Why does mirroring halve usable capacity?
Mirroring requires storing the same data twice (or more), so the usable capacity is half (or less) of the raw storage capacity. The DiskInternals RAID mirroring reference describes the trade-off: mirroring takes much of your RAID’s total storage capacity, 50% of the RAID capacity actually; if you make a RAID 1 with 4 1TB SSDs, you may expect the RAID to offer you a total of 4 TB storage space, but no, it will offer you just 1 TB as the total storage, which is equal to the storage of just one SSD out of the four you used. Standard 2-way mirror with 2 drives: usable capacity equals one drive’s capacity. 3-way mirror with 3 drives: usable capacity equals one drive’s capacity (66% capacity penalty); tolerates 2 simultaneous failures. ZFS supports n-way mirrors with arbitrary number of drives; capacity equals smallest drive’s capacity regardless of mirror count. The capacity penalty is the inherent trade-off for the redundancy guarantee: the system must have at least one copy of every byte at all times, so storage requirements are at least 2x the unique data size. Performance characteristics partially offset the cost: read operations can be parallelized across all mirror copies, providing read speed improvement scaling with mirror count; write operations must complete on all copies, so write speed is approximately equal to the slowest single drive.
Related glossary entries
- RAID 1: the most common implementation of disk mirroring at the array level.
- RAID 10: combines mirroring with striping for redundancy plus performance.
- RAID: the broader array technology framework that includes mirroring among its levels.
- Storage Snapshot: complementary protection providing point-in-time recovery vs mirror’s real-time availability.
- Backup vs Archive: the strategic context distinguishing mirrors, backups, and archives.
- 3-2-1 Backup Rule: the principle that mirrors alone do not satisfy; backups on separate storage are required.
- RAID Rebuild vs Recovery: the operations involved when a mirror member is replaced or fails.
Sources
- Wikipedia: Disk mirroring (accessed May 2026)
- TechTarget: What is disk mirroring (RAID 1)?
- DiskInternals: RAID 1 and Disk Mirroring
- LINBIT DRBD documentation: DRBD 9.0 user guide
- ipserverone: DRBD split-brain recovery
- xahteiwi: Solve a DRBD split-brain in 4 steps
- xahteiwi: DRBD split-brain detection
- Recital: DRBD split-brain manual recovery
- Nfina: Comprehensive Guide to RAID Mirroring
