RAID 6: Striping with Double Distributed Parity

The modern parity-based default for multi-TB arrays. RAID 6 stripes data across four or more disks while maintaining two distinct parity blocks per stripe (commonly P and Q), distributed evenly across all members. The dual-parity scheme tolerates two simultaneous disk failures: any pair of member disks can fail without data loss. Capacity efficiency is (N-2)/N: 50% for 4 disks, 67% for 6 disks, 75% for 8 disks. RAID 6 has displaced RAID 5 as the default for large arrays because the second parity provides a mathematical fallback when a surviving drive encounters an Unrecoverable Read Error (URE) during rebuild, addressing the failure mode that makes RAID 5 increasingly risky on multi-TB drives.


RAID 6 is a parity-based RAID configuration that stripes data across four or more disks while maintaining two distinct parity blocks per stripe (commonly called P and Q), distributed across all member disks. The P parity uses standard XOR (the same calculation as RAID 5); the Q parity uses Reed-Solomon coding over a Galois Field. The dual-parity scheme provides tolerance for two simultaneous disk failures: any two member disks can fail without data loss. RAID 6 achieves (N-2)/N capacity efficiency and has become the modern default for arrays with multi-TB drives because it provides a mathematical fallback when surviving disks encounter a URE during rebuild.

What RAID 6 Is

The DiskInternals RAID 6 reference captures the configuration’s core: “Double parity, central to RAID 6’s robustness, refers to the method of storing two sets of parity information across the disks. Unlike RAID 5, which relies on single parity to reconstruct lost data from a single drive failure, RAID 6 uses double parity to safeguard against potential losses due to dual drive failures. This not only enhances redundancy but also significantly raises the threshold of data protection, making RAID 6 an ideal solution for environments where data integrity and uptime are critical.”1

Why RAID 6 emerged

RAID 6 emerged in response to a specific reliability problem with RAID 5 on growing storage capacities. As individual disk capacities expanded from megabytes to multi-TB sizes, RAID 5 rebuild operations began reading data volumes that exceeded the Unrecoverable Read Error (URE) threshold of typical drives, making rebuild failure increasingly likely on a population basis. The arXiv RAID 6 paper captures the trend explicitly: “Because of this danger, RAID-5 is currently being replaced with RAID-6, which offers protection against double failure of drives within the array. RAID-6 refers to any technique where two strips of redundant data are added to the strips of user data, in such a way that all the information can be restored if any two strips are lost.”2

The minimum disk count and capacity math

RAID 6 requires a minimum of 4 disks: two disks’ worth of capacity goes to parity (P and Q) and the rest to data, with the parity blocks distributed across all members rather than held on dedicated disks. The capacity efficiency follows the (N-2)/N formula:

| Disk count | Usable capacity | Parity overhead | Efficiency |
|---|---|---|---|
| 4 disks | 2/4 of total | 2/4 (two disks’ worth) | 50% |
| 5 disks | 3/5 of total | 2/5 | 60% |
| 6 disks | 4/6 of total | 2/6 | 67% |
| 8 disks | 6/8 of total | 2/8 | 75% |
| 12 disks | 10/12 of total | 2/12 | 83.3% |
| 16 disks | 14/16 of total | 2/16 | 87.5% |

RAID 6 has lower capacity efficiency than RAID 5 (one extra disk’s worth of overhead), but the trade-off is rebuild safety on large arrays. For 4-disk arrays, RAID 6’s 50% efficiency matches RAID 10, though the fault tolerance differs: RAID 6 survives any two disk failures, while RAID 10 survives two only if they land in different mirrored pairs. As the array grows, RAID 6’s efficiency gap to RAID 5 narrows while its advantage over RAID 10 widens.
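
To make the formula concrete, here is a minimal Python sketch of the (N-2)/N arithmetic; the disk counts and the 16 TB drive size are illustrative values, not recommendations.

```python
# Minimal sketch: usable capacity and efficiency of a RAID 6 array.
# Assumes equal-sized member disks; values below are illustrative.
def raid6_usable(n_disks: int, disk_tb: float) -> tuple[float, float]:
    """Return (usable TB, efficiency) for an N-disk RAID 6 set."""
    if n_disks < 4:
        raise ValueError("RAID 6 requires at least 4 disks")
    usable = (n_disks - 2) * disk_tb          # two disks' worth goes to P and Q
    efficiency = (n_disks - 2) / n_disks      # the (N-2)/N formula
    return usable, efficiency

for n in (4, 6, 8, 12, 16):
    usable, eff = raid6_usable(n, 16.0)       # 16 TB drives as an example
    print(f"{n} disks: {usable:.0f} TB usable, {eff:.1%} efficient")
```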

Where RAID 6 fits

RAID 6 represents the “rebuild-safe” point in the parity-based RAID design space:

  • Performance: read performance similar to RAID 5; write performance reduced by dual parity calculation.
  • Capacity efficiency: moderate ((N-2)/N).
  • Redundancy: double-disk fault tolerance.
  • Cost: moderate (two disks’ overhead regardless of array size).
  • Rebuild safety: excellent (Q parity provides URE fallback).

Enterprise and consumer deployment

The Network-Switch RAID parity reference captures the broad enterprise deployment: “Cisco UCS, Dell EMC Unity XT, and HPE Alletra use RAID 6 or its variants (such as RAID-DP or RAID-TEC). NetApp employs RAID-DP (Double Parity) and RAID-TEC (Triple Parity) for enterprise data protection.”3 Consumer-grade RAID 6 deployments include:

  • Synology SHR-2 (Synology Hybrid RAID 2): RAID 6 layered with mixed-disk-size optimization.
  • QNAP RAID 6: default option for 4+ disk arrays in QNAP NAS units.
  • Linux md-raid (RAID 6): standard kernel RAID 6 implementation.
  • ZFS RAID-Z2: ZFS’s double-parity RAID variant; functionally equivalent to RAID 6 with checksum-based corruption detection.
  • Btrfs RAID 6: Btrfs implementation; remains marked as experimental due to known issues.
  • Windows Storage Spaces dual parity: the Microsoft equivalent.

Triple parity variants

Some enterprise systems extend the RAID 6 concept to triple parity, providing tolerance for three simultaneous disk failures. The igoro.com RAID 6 dual parity guide captures the extension principle: “It is possible to calculate additional parities to protect against additional device failures.”4 Production examples:

  • NetApp RAID-TEC: three parity disks; tolerates three failures.
  • ZFS RAID-Z3: ZFS triple-parity variant.
  • BeeGFS triple parity: high-performance computing storage variant.

Triple parity is increasingly relevant for the largest enterprise arrays where rebuild times stretch into days and the probability of a third failure during rebuild becomes meaningful.

The P+Q Double Parity Architecture

RAID 6’s defining feature is the dual parity mechanism. Understanding the P+Q architecture clarifies both how recovery works and why Reed-Solomon coding is necessary.

The two parity types

The IONOS RAID 6 documentation describes the parity scheme: “When it comes to parity, RAID 6 differs from other levels: the system always saves two sets of parity information. In that way, associated data can be restored if one or two disks fail. For this purpose, a RAID 6 system can optionally use the XOR logic or a mix of XOR logic and multi-bit error correction using Reed-Solomon code.”5 The two parity blocks per stripe:

  • P parity: standard XOR of all data blocks in the stripe (same as RAID 5).
  • Q parity: Reed-Solomon coding over a Galois Field; produces position-dependent values.

Why two parity types are needed

The DiskInternals RAID 6 documentation captures the requirement: “RAID 6 employs advanced techniques like XOR operations and Reed-Solomon coding to create two distinct parity blocks. These methods ensure a comprehensive redundancy scheme. When writing parity data, the process involves calculating two separate sets of parity: the first using XOR operations, similar to RAID 5, and the second using more complex Reed-Solomon code for additional protection.” Two XOR-based parity blocks would not provide additional information; they would be linearly dependent and could not solve a system with two unknowns. The Reed-Solomon parity provides mathematically independent information that, combined with the XOR parity, enables recovery of any two missing data blocks.

The P+Q+data layout for a 6-disk array

For a 6-disk RAID 6 array, a typical stripe layout looks like:

| Stripe | Drive 0 | Drive 1 | Drive 2 | Drive 3 | Drive 4 | Drive 5 |
|---|---|---|---|---|---|---|
| Stripe 1 | D1 | D2 | D3 | D4 | P | Q |
| Stripe 2 | D5 | D6 | D7 | P | Q | D8 |
| Stripe 3 | D9 | D10 | P | Q | D11 | D12 |
| Stripe 4 | D13 | P | Q | D14 | D15 | D16 |

P and Q rotate together across stripes; this distributes parity load evenly. Different controllers use different rotation patterns (left-asymmetric, left-symmetric, right-asymmetric, right-symmetric); the patterns matter for software-based recovery because reconstruction tools need to know the exact parity placement.
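
As an illustration of the rotation idea, the short Python sketch below reproduces the pattern in the table above, with P stepping one column to the left on each stripe and Q immediately following it. It is one plausible pattern, not a description of any particular controller's on-disk layout.

```python
# Minimal sketch of one P/Q rotation pattern (the one shown in the table above).
# Real controllers use several rotation variants; this is illustrative only.
def stripe_layout(n_disks: int, n_stripes: int):
    layout, block = [], 1
    for stripe in range(n_stripes):
        p_pos = (n_disks - 2 - stripe) % n_disks   # P walks left one column per stripe
        q_pos = (p_pos + 1) % n_disks              # Q sits immediately after P
        row = []
        for disk in range(n_disks):
            if disk == p_pos:
                row.append("P")
            elif disk == q_pos:
                row.append("Q")
            else:
                row.append(f"D{block}")
                block += 1
        layout.append(row)
    return layout

for row in stripe_layout(6, 4):
    print(" ".join(f"{cell:>3}" for cell in row))
```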

The PD and RS naming convention

The anadoxin RAID 6 explanation uses an alternative naming that clarifies the role of each parity: “In order to use the same error recovery technique as RAID 6 uses, you’ll need two additional disk drives, the PD drive, and the RS drive. The special PD drive (named after Parity Drive, sometimes called P in whitepapers) contains the XOR data, generated automatically from D1, D2 and D3. The second special RS drive (named after Reed-Solomon Drive, sometimes also called Q) contains the Reed-Solomon codes, calculated from the same data as PD.”6 The PD/RS terminology is less common than P/Q but conceptually clearer for new readers.

Single-disk failure recovery

When one disk fails in a RAID 6 array, recovery uses the same XOR approach as RAID 5: the failed disk’s data is reconstructed by XORing the surviving data blocks with the P parity (Q parity is not strictly needed for single-disk recovery). The recovery is fast and computationally simple. Single-disk RAID 6 recovery is essentially a RAID 5 recovery on the data + P-parity subset of the array.
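
A minimal sketch of that single-disk path, using made-up two-byte blocks; the reconstruction is the same XOR that produced P in the first place.

```python
# Minimal sketch: single-disk recovery is plain XOR, exactly as in RAID 5.
from functools import reduce

def xor_bytes(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d1, d2, d3 = b"\x10\x20", b"\x0f\x0f", b"\xaa\x55"   # illustrative data blocks
p = xor_bytes([d1, d2, d3])            # P parity written at array-creation time

# Suppose the disk holding d2 fails: XOR the survivors with P to rebuild it.
recovered_d2 = xor_bytes([d1, d3, p])
assert recovered_d2 == d2
```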

Two-disk failure recovery

Two-disk failure recovery is where RAID 6’s Reed-Solomon mathematics become essential:

  1. Identify which two disks failed.
  2. Read the surviving data blocks plus both P and Q parity blocks for each affected stripe.
  3. Apply Reed-Solomon arithmetic to solve the system of two equations with two unknowns.
  4. The mathematics produce the correct values for the two missing data blocks.
  5. Reconstruct the failed disks’ data accordingly.

This calculation is more expensive than single-disk XOR recovery but completes in reasonable time for typical arrays. Hardware RAID controllers include dedicated Reed-Solomon acceleration; software RAID 6 implementations use CPU-based Reed-Solomon, which is fast on modern processors with SIMD instructions. A worked byte-level example of the two-unknown solve follows.
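
The sketch below walks through steps 2-4 for a single stripe of made-up byte values. It assumes the GF(2^8) arithmetic described in the next section, with the 0x11D polynomial and generator 2 used by the Linux md driver; hardware controllers may use different Reed-Solomon coefficients, so this is illustrative rather than vendor-accurate.

```python
# Illustrative walk-through of two-data-disk recovery for one stripe.
def gf_mul(a: int, b: int, poly: int = 0x11D) -> int:
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly                          # reduce modulo the field polynomial
        b >>= 1
    return r

def gf_pow(a: int, n: int) -> int:
    out = 1
    for _ in range(n):
        out = gf_mul(out, a)
    return out

def gf_div(a: int, b: int) -> int:
    return gf_mul(a, gf_pow(b, 254))           # b^254 is the inverse of b in GF(2^8)

data = [0x37, 0xA1, 0x5C, 0x09]                # one byte per data disk, one stripe
P, Q = 0, 0
for i, d in enumerate(data):
    P ^= d                                     # P = XOR of all data blocks
    Q ^= gf_mul(gf_pow(2, i), d)               # Q = GF-sum of g^i * D_i

# Steps 1-2: disks 1 and 3 fail; read the survivors plus P and Q.
x, y = 1, 3
A = P ^ data[0] ^ data[2]                      # A = D_x ^ D_y
B = Q ^ data[0] ^ gf_mul(gf_pow(2, 2), data[2])  # B = g^x*D_x ^ g^y*D_y

# Steps 3-4: two equations, two unknowns -- solve for D_x, then D_y follows.
Dx = gf_div(B ^ gf_mul(gf_pow(2, y), A), gf_pow(2, x) ^ gf_pow(2, y))
Dy = A ^ Dx
assert (Dx, Dy) == (data[1], data[3])          # both missing blocks recovered
```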

Reed-Solomon Coding and the XOR Limitation

The mathematical underpinnings of RAID 6 are worth understanding because they explain why XOR alone is insufficient and why Reed-Solomon enables two-disk recovery.

Why XOR alone cannot recover two failures

The raid-recovery-guide.com parity reference captures the limitation precisely: “RAID 6 uses two different functions to calculate the parity. This is because the results of XOR function do not depend on the position of the original data: 1 XOR 0 = 1, 0 XOR 1 = 1, and, in general, P(A,B) = P(B,A). For a RAID 6 it is not enough just to add one more XOR function. If two disks in a RAID 6 array fail, it is not possible to determine data blocks location using the XOR function alone.”7 The mathematical issue:

  • If P = D1 ⊕ D2 ⊕ D3 (XOR is commutative), then P doesn’t tell you which value came from which position.
  • With one missing block, the XOR can recover it: knowing P and 2 of {D1, D2, D3}, the third is determined.
  • With two missing blocks, two equations are needed. A second XOR equation provides the same kind of information; the system is underdetermined.
  • A position-dependent function is needed to break the symmetry; the short sketch after this list shows the problem numerically.
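
A tiny numerical illustration of that underdetermination (values are arbitrary): with two blocks missing and only the XOR parity surviving, every possible value of the first missing byte yields a consistent pair.

```python
# With two blocks missing and only XOR parity P available, the system is
# underdetermined: many (d1, d2) pairs satisfy d1 ^ d2 == the known value.
d1, d2, d3 = 0x10, 0x20, 0x30          # original stripe (for reference)
p = d1 ^ d2 ^ d3                       # XOR parity written at array creation

known = p ^ d3                         # survivors: d3 and P; all we learn is d1 ^ d2
candidates = [(a, a ^ known) for a in range(256)]
assert (d1, d2) in candidates          # the real pair is in there...
print(len(candidates))                 # ...along with 255 other consistent pairs
```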

Reed-Solomon as the position-dependent function

The raid-recovery-guide.com reference continues: “Thus in addition to the XOR function, RAID 6 arrays utilize Reed-Solomon code that produces different values depending on the location of the data blocks, so that Q(A,B) ≠ Q(B,A).” Reed-Solomon coding involves multiplication by position-specific coefficients in a Galois Field; the result is that swapping two data blocks produces a different Q value, providing the position information that XOR alone cannot.

The Galois Field GF(2^8) implementation

The igoro.com RAID 6 dual parity reference describes the practical implementation: “In the end, a GF(2^8) field gives us exactly what we wanted. It is a field of size 2^8, and it gives us efficient +, -, ×, / operators that we can use to calculate P and Q, and if needed recalculate a lost data block from the remaining data blocks and parities. And, each of P and Q perfectly fits into a byte. The resulting coding is called Reed-Solomon error correction, and that’s the method used by RAID 6 configuration.” Key properties of GF(2^8):

  • Field size 2^8 = 256, fitting exactly in one byte.
  • Addition is XOR (same as bitwise XOR of the bytes).
  • Multiplication uses polynomial multiplication modulo an irreducible polynomial.
  • Division uses the inverse element computed via the extended Euclidean algorithm.
  • Both P and Q parity values fit in single bytes, making storage efficient; a table-based multiply is sketched after this list.
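
The sketch below builds the log/antilog tables that practical implementations use, turning a GF(2^8) multiply into two lookups plus an addition modulo 255. It assumes the 0x11D polynomial and generator 2 used by the Linux md RAID 6 code; other implementations may pick different constants.

```python
# Build GF(2^8) log/antilog tables so multiplication becomes table lookups.
EXP = [0] * 512                       # antilog table, doubled to skip the mod 255
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1                           # multiply by the generator (g = 2)
    if x & 0x100:
        x ^= 0x11D                    # reduce modulo the field polynomial
for i in range(255, 512):
    EXP[i] = EXP[i - 255]             # wrap-around copies avoid an explicit mod

def gf_mul(a: int, b: int) -> int:
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]       # log(a) + log(b), mapped back through EXP

assert gf_mul(2, 0x80) == 0x1D        # x * x^7 = x^8, which reduces to 0x1D
```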

The 255-data-disk theoretical maximum

The arXiv RAID 6 paper describes the academic reference scheme: “A well-known RAID 6 scheme is based on the rate-255/257 Reed-Solomon code. In this scheme two extra disks are introduced for up to 255 disks of data and two parity bytes are computed per 255 data bytes. Hardware implementation of RS-based RAID 6 is as simple as operations in GF(2^8), which are byte-based. Addition of bytes is just a bitwise XOR. Multiplication of bytes corresponds to multiplication of boolean polynomials modulo an irreducible polynomial.” The 255-data-disk maximum is a mathematical limit of the GF(2^8) field; larger arrays would require GF(2^16) or alternative coding schemes.

Television and DVB connection

The IONOS RAID 6 reference notes a fascinating crossover use of Reed-Solomon: “The latter is also required to transmit television signals according to the DVB standard, where it improves the bit error rate of the received signal.” Reed-Solomon codes are not specific to storage; they’re used in CDs and DVDs for scratch resilience, in DVB-T television broadcasting for signal robustness, in QR codes for damage tolerance, and in deep-space communication (the Voyager probes). RAID 6 borrows a well-established error-correction primitive rather than inventing a new one.

XOR-only RAID 6 alternatives

The USPTO patent literature documents alternative RAID 6 implementations that avoid Reed-Solomon arithmetic in favor of multiple XOR operations: “RAID 6 provides for recovery from a two-drive failure, but at a penalty in cost and complexity of the array controller because the Reed-Solomon codes are complex and may require significant computational resources. The complexity of Reed-Solomon codes may preclude the use of such codes in software and may necessitate the use of expensive special purpose hardware. Thus, implementation of Reed-Solomon codes in a disk array increases the cost and complexity of the array. Unlike the simpler XOR codes, Reed-Solomon codes cannot easily be distributed among dedicated XOR processors.” Alternative RAID 6 schemes:

  • EVENODD codes: XOR-only scheme; uses two XOR-based parities along orthogonal directions.
  • Row-diagonal parity: NetApp’s RAID-DP uses this approach.
  • Cyclic group codes: alternative linear block codes for two-erasure correction.

These alternatives have similar fault tolerance to Reed-Solomon RAID 6 but different performance characteristics; the choice between them is implementation-specific.

Performance and Capacity Characteristics

RAID 6’s performance characteristics differ from RAID 5 in predictable ways. Understanding the differences clarifies when RAID 6 is appropriate vs alternatives.

Read performance

RAID 6 read performance is comparable to RAID 5:

  • Sequential reads: close to (N-2) × single-disk bandwidth (parity blocks are skipped during normal reads).
  • Random reads: can scale up to (N-2) × single-disk IOPS.
  • Mixed workloads: read performance roughly tracks RAID 0 with two fewer effective disks.
  • Degraded reads: after disk failure, reads to the failed disk’s location require XOR or Reed-Solomon computation; this slows reads slightly but doesn’t break functionality.

Write performance and dual parity penalty

RAID 6 write performance is reduced compared to RAID 5 because two parity blocks must be updated per write. The IEEE Reed-Solomon RAID 6 paper captures the issue: “When RAID 6 is implemented by Reed-Solomon (RS) codes, the penalty of the writing performance is on the field multiplications in the second parity.”8 Specific write characteristics:

  • Full-stripe writes: approximately (N-2) × single-disk bandwidth (both parities computed in parallel).
  • Partial-stripe small writes: 6 disk operations per logical write (3 reads + 3 writes for data, P, and Q); substantial penalty.
  • Hardware RS acceleration: hardware controllers with dedicated Reed-Solomon engines minimize the second-parity penalty.
  • Software RAID 6: modern CPUs with SSE/AVX2 instructions can compute Reed-Solomon at multi-GB/s speeds; the penalty is small in practice.
  • Database OLTP workloads: the partial-stripe penalty hurts here too; many shops put busy transactional volumes on RAID 10 instead and reserve RAID 6 for archival or read-mostly tiers. The read-modify-write arithmetic behind the penalty is sketched after this list.
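
For the partial-stripe case, the controller does not recompute the whole stripe: it reads the old data block plus the old P and Q, folds in the difference, and writes three blocks back. A minimal Python sketch of that delta update, again assuming the 0x11D polynomial and generator 2 (vendor controllers may differ):

```python
# RAID 6 read-modify-write for a small write to data disk `i`:
# 3 reads (old data, old P, old Q) + 3 writes (new data, new P, new Q).
def gf_mul(a: int, b: int, poly: int = 0x11D) -> int:
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def small_write(d_old: int, d_new: int, p_old: int, q_old: int, i: int):
    delta = d_old ^ d_new                     # the change on the rewritten disk
    coeff = 1
    for _ in range(i):                        # g^i with generator g = 2
        coeff = gf_mul(coeff, 2)
    p_new = p_old ^ delta                     # P folds the change in with XOR
    q_new = q_old ^ gf_mul(coeff, delta)      # Q weights the change by g^i
    return p_new, q_new

# Consistency check against a full-stripe recompute (illustrative bytes):
data = [0x11, 0x22, 0x33, 0x44]
P, Q, coeff = 0, 0, 1
for d in data:
    P ^= d
    Q ^= gf_mul(coeff, d)
    coeff = gf_mul(coeff, 2)
p2, q2 = small_write(data[2], 0x99, P, Q, 2)
data[2] = 0x99
P2, Q2, coeff = 0, 0, 1
for d in data:
    P2 ^= d
    Q2 ^= gf_mul(coeff, d)
    coeff = gf_mul(coeff, 2)
assert (p2, q2) == (P2, Q2)
```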

Rebuild performance

RAID 6 rebuild characteristics are similar to RAID 5 but with key safety improvements:

  • Single-disk rebuild uses XOR (no Reed-Solomon math); same speed as RAID 5 single-disk rebuild.
  • Two-disk rebuild uses Reed-Solomon math; approximately 2x slower than single-disk rebuild.
  • Rebuild duration scales with disk size; modern multi-TB drives can take 12-48 hours.
  • Array remains operational during rebuild.
  • Critical advantage: if a URE occurs on a surviving disk during single-disk rebuild, the Q parity provides a mathematical fallback to recover the affected sector; RAID 5 has no equivalent fallback.

The “rebuild safety” advantage quantified

The DiskInternals RAID 6 reference captures the safety benefit: “Double parity, central to RAID 6’s robustness, refers to the method of storing two sets of parity information across the disks… [to] safeguard against potential losses due to dual drive failures.” On large arrays, the URE math that makes RAID 5 risky becomes manageable in RAID 6:

  • RAID 5 rebuild URE causes rebuild abort and potential data loss.
  • RAID 6 rebuild URE is recoverable: the Q parity reconstructs the missing sector.
  • RAID 6 only fails if URE occurs in TWO different disks at the same stripe location during rebuild, a much lower probability.
  • For a 4-disk RAID 5 built from 16 TB drives, the expected number of UREs during a full rebuild is roughly 3.84 assuming a consumer-class rate of one URE per 10^14 bits read, so rebuild failure is more likely than not; for the same array as RAID 6, data loss requires two UREs at the same stripe location, which has negligible probability. The arithmetic is sketched after this list.
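
A minimal sketch of the expected-URE arithmetic behind those figures. The one-error-per-10^14-bits rate is a common consumer-drive specification and is assumed here; enterprise drives are typically rated an order of magnitude better, so treat the output as an order-of-magnitude illustration.

```python
# Expected unrecoverable read errors while reading every surviving bit once.
def expected_ures(surviving_disks: int, disk_tb: float, ure_per_bit: float = 1e-14) -> float:
    bits_read = surviving_disks * disk_tb * 1e12 * 8   # a rebuild reads all survivors
    return bits_read * ure_per_bit

# 4-disk RAID 5 of 16 TB drives: three survivors must be read end to end.
print(expected_ures(3, 16.0))        # ~3.84 expected UREs -> rebuild likely fails
```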

Cost and capacity trade-off

RAID 6’s overhead vs RAID 5:

  • One additional disk’s capacity consumed for parity (regardless of array size).
  • Slightly slower writes due to dual parity calculation.
  • Dramatically improved rebuild safety on large arrays.
  • Same minimum complexity and tooling as RAID 5 from administrator perspective.

For arrays of 4+ disks with multi-TB drives, the rebuild safety improvement justifies the extra disk’s overhead in most deployments.

RAID 6 and Data Recovery

RAID 6 recovery scenarios differ from RAID 5 in important ways. The double parity provides additional resilience but also additional complexity in some recovery operations.

Single-disk failure recovery

The most common RAID 6 maintenance scenario: one disk has failed, the others are healthy. The standard process:

  1. Locate the failed member through the array management console or hardware indicator LEDs.
  2. Run health checks on the remaining drives (vendor diagnostic tool or SMART self-test).
  3. Slot in a healthy replacement; if the chassis supports hot-swap, the controller picks it up automatically.
  4. Kick off the rebuild; the controller reconstructs the missing data using XOR of surviving data blocks against the P parity.
  5. Watch progress in the management UI; multi-TB rebuilds commonly take anywhere from half a day to two days.
  6. When the array returns to fully redundant status, schedule a parity scrub to confirm both P and Q values reconcile.

Crucially, if a URE occurs during this single-disk rebuild, the Q parity can recover the affected sector; this is the failure mode that distinguishes RAID 6 safety from RAID 5 risk.

Two-disk simultaneous failure recovery

RAID 6’s defining capability: two disks can fail simultaneously without data loss. The recovery process:

  1. Identify both failed disks; verify the remaining members are healthy.
  2. Replace both failed disks with healthy replacements.
  3. Initiate dual-disk rebuild; the controller uses Reed-Solomon math to solve for both missing data sets.
  4. The rebuild takes roughly twice as long as a single-disk rebuild because each stripe must be solved for two missing blocks with Reed-Solomon arithmetic instead of a single XOR pass.
  5. Array remains in degraded state with reduced redundancy until rebuild completes.
  6. After completion, a scrub verifies both parity blocks for all stripes.

Three-disk failure scenario

When three disks die at once, RAID 6 cannot recover the array on its own:

  • The Reed-Solomon scheme handles at most two erasures per stripe; three unknowns leave the system underdetermined.
  • Where each of the three drives still has some readable sectors, a recovery lab can image the salvageable areas and stitch together a partial reconstruction.
  • If even one drive is mechanically dead or has a fried PCB, restoring from backup is usually the only realistic option.
  • Triple-failure odds were once treated as effectively negligible, but as drives age in unison and rebuild windows lengthen, the scenario is no longer purely theoretical.

Controller failure recovery

When the RAID HBA dies but every drive is intact, several recovery routes exist:

  • Drop in an identical-model replacement controller: the new card typically reads the on-disk metadata and brings the array back online unchanged.
  • Cross-vendor reassembly via specialist software: products like UFS Explorer can reconstruct RAID 6 directly from member drives provided the metadata layout is documented.
  • Generic RAID-aware recovery suites: R-Studio, ReclaiMe, and DiskInternals RAID Recovery cover most controller-loss situations with automatic layout detection.
  • mdadm-based assembly on Linux: arrays built on Linux-friendly hardware can sometimes be re-imported by the kernel’s md driver after metadata interpretation.

Reconstruction parameters for software recovery

Software-based RAID 6 reconstruction requires identification of more parameters than RAID 5:

  • Disk order: which physical disk was Disk-0, Disk-1, etc.
  • Stripe size: the chunk size used during array creation.
  • P parity rotation: left-asymmetric, left-symmetric, right-asymmetric, or right-symmetric.
  • Q parity placement: typically rotates one position offset from P; some controllers use different placement.
  • Reed-Solomon coefficient scheme: different vendors use different generator polynomials.
  • Start offset: where the RAID data begins on each disk.

The Reed-Solomon coefficient differences between vendors are the main reason that software recovery of RAID 6 from arbitrary controllers is more complex than RAID 5; getting the wrong coefficients produces wrong reconstruction.
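
As a rough illustration of how such parameters are verified, the hypothetical helper below scores one candidate layout by checking that the non-Q members of each stripe XOR to zero (the data blocks and P cancel); Q itself can only be validated once the vendor's Reed-Solomon coefficients are known. The function name, signature, and parameters are invented for this sketch and do not correspond to any specific tool.

```python
# Score a guessed (stripe size, disk order, Q rotation) against disk images:
# high scores mean most sampled stripes are XOR-consistent, so the guess is
# plausible; low scores mean the parameters are wrong.
from functools import reduce

def xor_consistency(images: list[bytes], stripe_size: int, q_col_of_row, rows: int = 64) -> float:
    """Fraction of sampled stripes where the non-Q members XOR to zero."""
    hits = checked = 0
    for r in range(rows):
        off = r * stripe_size
        chunks = [img[off:off + stripe_size]
                  for i, img in enumerate(images) if i != q_col_of_row(r)]
        if any(len(c) < stripe_size for c in chunks):
            break
        xored = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))
        checked += 1
        hits += (xored == bytes(stripe_size))
    return hits / checked if checked else 0.0

# Hypothetical usage against full-disk images loaded as `disk_images`, testing a
# rotation where Q trails P by one column:
# score = xor_consistency(disk_images, 64 * 1024,
#                         lambda r: (len(disk_images) - 1 - r) % len(disk_images))
```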

RAID 6 alternative implementations

The raid-recovery-guide.com reference documents several non-standard RAID 6 placement schemes that complicate recovery:

  • Wide pace parity: Promise controllers use this; parity moves more than one column per stripe.
  • Delayed parity (HP Smart Array): parity block size differs from data block size.
  • Storage Spaces arbitrary disk order: Microsoft’s implementation changes disk order over hundreds-of-megabytes intervals.

These variations are why RAID 6 recovery often requires controller-specific knowledge or specialized tools that understand vendor-specific formats.

Recovery tools for RAID 6

Software products that handle RAID 6 reassembly include:

  • R-Studio: commercial tool with RS-aware automatic detection across the major RAID levels including dual-parity arrays.
  • ReclaiMe Free RAID Recovery: free utility that auto-detects layout for parity-based arrays.
  • DiskInternals RAID Recovery: Windows tool listing RAID 6 (and the related ZFS/Btrfs equivalents) among its supported formats.
  • UFS Explorer Professional Recovery: particularly strong on cross-vendor RAID 6 layouts and unusual stripe patterns.
  • Linux mdadm: open-source software RAID 6 driver; useful for Linux-native arrays in degraded states.
  • RAID Reconstructor: dedicated reverse-engineering tool that supports RAID 6 layouts.

Professional services for RAID 6

Engaging a recovery lab makes sense for RAID 6 when:

  • At least three drives need cleanroom-grade physical work to be readable. Cleanroom recovery is the prerequisite to any further reconstruction.
  • The hardware controller used a proprietary Reed-Solomon configuration that off-the-shelf software cannot interpret correctly.
  • DIY software-based reconstruction has already produced inconsistent or corrupted output.
  • The data on the array has business or sentimental value that warrants the lab pricing typical of multi-drive RAID work.

RAID 6’s appropriate use cases have expanded substantially as drives have grown larger. For arrays of 4+ disks with multi-TB drives, RAID 6 has become the recommended default: the rebuild safety advantage over RAID 5 is mathematically meaningful at modern capacities, and the additional disk overhead is acceptable in exchange. Modern alternatives like ZFS RAID-Z2 and Btrfs RAID 6 add checksum-based silent corruption detection on top of the dual parity; for new deployments, these checksumming-aware variants are often preferable to traditional hardware RAID 6.

For users wondering whether RAID 6 fits their needs, the practical guidance follows array characteristics. RAID 6 fits 4+ disk arrays where rebuild URE risk is real and capacity efficiency matters more than maximum write performance; RAID 5 fits 3-disk arrays with smaller drives where rebuild risk is acceptable; RAID 1 mirroring fits 2-disk systems where simplicity matters most; RAID 10 fits performance-critical workloads where both speed and 2-disk redundancy matter. The “RAID 5 is dead” debate is resolved here: RAID 6 provides the rebuild safety that RAID 5 lacks on large drives, at the cost of one additional disk’s overhead. NAS vendors (Synology, QNAP) increasingly default to RAID 6 / SHR-2 for 4+ disk configurations precisely because of this trade-off.

For users facing potential RAID 6 data loss, the practical guidance reflects the configuration’s failure modes. Single-disk and two-disk failures are generally recoverable through normal RAID maintenance; three-disk failures exceed fault tolerance and typically require backups. URE-induced rebuild failure is much rarer in RAID 6 than RAID 5, but write hole and silent corruption remain real concerns; checksum-based file systems (ZFS, Btrfs) eliminate these. For severe RAID 6 failure scenarios, image all surviving disks before reconstruction attempts; RAID-aware recovery software with Reed-Solomon support handles software-level reconstruction; broader data recovery tools handle the file-system layer once the array is reassembled. Professional services remain appropriate when physical damage requires cleanroom work or when initial software recovery has failed; comprehensive backups remain the most reliable protection against catastrophic failure regardless of RAID configuration.

RAID 6 FAQ

What is RAID 6?

RAID 6 is a parity-based RAID configuration that stripes data across four or more disks while maintaining two distinct parity blocks per stripe (commonly called P and Q), distributed evenly across all member disks. The P parity uses standard XOR (the same calculation as RAID 5); the Q parity uses Reed-Solomon coding over a Galois Field, typically GF(2^8). The dual parity scheme provides tolerance for two simultaneous disk failures: any two member disks can fail without data loss. RAID 6 achieves (N-2)/N capacity efficiency: 50% for 4 disks, 60% for 5 disks, 67% for 6 disks, 75% for 8 disks. RAID 6 has become the modern default for arrays with multi-TB drives because it provides a mathematical fallback during rebuild when surviving drives encounter Unrecoverable Read Errors.

How does RAID 6 differ from RAID 5?

RAID 6 extends RAID 5 with a second parity block per stripe. The DiskInternals RAID 6 documentation captures the difference: ‘Unlike RAID 5, which relies on single parity to reconstruct lost data from a single drive failure, RAID 6 uses double parity to safeguard against potential losses due to dual drive failures.’ Specific differences: minimum disk count (RAID 5: 3 disks; RAID 6: 4 disks); fault tolerance (RAID 5: 1 disk; RAID 6: 2 disks); capacity efficiency (RAID 5: (N-1)/N; RAID 6: (N-2)/N); write performance penalty (RAID 6 is slower because two parity blocks must be updated per write); rebuild safety (RAID 6 can rebuild even when surviving disks encounter URE during rebuild, because the Q parity provides a mathematical fallback). For modern arrays with multi-TB drives, the rebuild safety advantage is the primary reason to choose RAID 6 over RAID 5.

Why does RAID 6 use Reed-Solomon coding instead of just XOR?

RAID 6 needs two parity blocks that are mathematically distinct. The raid-recovery-guide.com documentation captures the limitation of XOR alone: ‘The results of XOR function do not depend on the position of the original data: 1 XOR 0 = 1, 0 XOR 1 = 1, and, in general, P(A,B) = P(B,A). For a RAID 6 it is not enough just to add one more XOR function. If two disks in a RAID 6 array fail, it is not possible to determine data blocks location using the XOR function alone. Thus in addition to the XOR function, RAID 6 arrays utilize Reed-Solomon code that produces different values depending on the location of the data blocks, so that Q(A,B) ≠ Q(B,A).’ The position-dependent property of Reed-Solomon is what makes recovery of two simultaneous failures mathematically possible. Reed-Solomon arithmetic operates over a Galois Field (typically GF(2^8) for byte-sized values); addition is XOR but multiplication uses polynomial operations modulo an irreducible polynomial.

What is the minimum number of disks for RAID 6?

RAID 6 requires a minimum of 4 disks: 2 for data plus 2 for parity (P and Q). With 4 disks, capacity efficiency is 50% (two disks’ worth of capacity consumed by parity, distributed across the array). Practical RAID 6 deployments typically use 5-12 disks; capacity efficiency improves with more disks (5 disks = 60%, 6 disks = 67%, 8 disks = 75%, 12 disks = 83%). Beyond 12 disks, alternatives like RAID 60 (striped RAID 6 sets) or erasure coding schemes are often preferred. The Reed-Solomon scheme used by most RAID 6 implementations supports up to 255 data disks per stripe (with 2 parity disks), giving the rate-255/257 Reed-Solomon code described in academic literature; in practice, deployments with that many disks are rare due to rebuild times.

Does RAID 6 have a write hole like RAID 5?

Yes. RAID 6 has the same write hole vulnerability as RAID 5, and the exposure is somewhat larger because two parity blocks must be updated per write. The write hole occurs when a power failure during a multi-disk write leaves data and parity blocks inconsistent; the inconsistency stays silent until a later disk failure requires using parity for reconstruction. Mitigation strategies are the same as for RAID 5: a Battery Backup Unit (BBU) on the RAID controller; an Uninterruptible Power Supply (UPS) for the system; copy-on-write file systems like ZFS RAID-Z2 that eliminate the write hole through transactional semantics; and forced array resynchronization after improper shutdown.

How is data recovered from a failed RAID 6?

RAID 6 recovery prospects depend on the failure pattern. Single-disk or two-disk failure with healthy remaining drives is recoverable through standard RAID controller rebuild: replace failed disks and let the controller reconstruct via P+Q parity calculations. Three-disk simultaneous failure exceeds RAID 6’s fault tolerance and typically means complete data loss; only specialized professional services with disk imaging may recover partial data. Rebuild URE failure is much less common than in RAID 5 because the Q parity provides a mathematical fallback when surviving disks encounter unreadable sectors during rebuild. Recovery approach for severely-failed RAID 6: stop further operations on the array; image all surviving disks to separate files; use specialized RAID recovery software (R-Studio, ReclaiMe, UFS Explorer) to reassemble the array virtually from images; for unrecoverable scenarios, restore from backup. Comprehensive backups remain essential.

Related glossary entries

  • RAID 5: single distributed parity; RAID 6 extends RAID 5 with a second parity block.
  • RAID 1: mirroring approach to redundancy; alternative for 2-disk redundancy.
  • RAID 0: striping foundation; RAID 6 adds dual distributed parity to RAID 0.
  • RAID: the parent concept; RAID 6 extends the original five RAID levels.
  • ZFS: ZFS RAID-Z2 is the checksum-aware equivalent of hardware RAID 6.
  • Btrfs: Btrfs RAID 6 implementation with checksum-based corruption detection.
  • Cleanroom Recovery: physical damage to RAID 6 disks may require cleanroom work.

About the Authors

Researched & Reviewed By
Rachel Dawson
Technical Approver · Data Recovery Engineer

Rachel brings over twelve years of data recovery engineering experience including substantial work on RAID 6 reconstruction across NetApp RAID-DP, Synology SHR-2, QNAP RAID 6, and various hardware controller implementations. The most consistent pattern in RAID 6 cases is that the configuration delivers on its rebuild-safety promise: dual-disk failures that would be unrecoverable in RAID 5 are routinely recovered without intervention beyond drive replacement. The harder cases involve three-disk failures (exceeding fault tolerance) and controller-specific Reed-Solomon coefficient differences that complicate cross-vendor recovery. The practical recovery sequence works regardless of the specific RAID 6 variant: image all drives before reconstruction, identify the controller’s Reed-Solomon scheme, work from images, and accept that some catastrophic failures cannot be recovered without backups.

