Btrfs: Linux’s B-tree Copy-on-Write File System

Linux’s native answer to ZFS. Btrfs (B-tree File System, pronounced “better F S” or “butter F S”) is a copy-on-write file system that ships in the mainline Linux kernel and provides similar features to ZFS without the licensing constraints. Created by Chris Mason at Oracle in 2007 and declared stable on disk in November 2013, Btrfs powers SUSE Linux Enterprise, openSUSE, Fedora Workstation, and Synology DSM by default. Single-disk and RAID 1/10 use cases are stable and widely deployed; RAID 5/6 has long-standing reliability issues that limit production use.

Btrfs (B-tree File System) is a modern copy-on-write file system for Linux that combines a file system, logical volume manager, and software RAID into one integrated stack. Created by Chris Mason at Oracle in 2007, merged into the mainline Linux kernel in 2009, and declared stable on-disk in November 2013, Btrfs organizes all data and metadata as B-trees and uses copy-on-write mechanics for all modifications. Btrfs is intended to address the lack of pooling, snapshots, integrity checking, data scrubbing, and integral multi-device spanning in traditional Linux file systems like ext4.

What Btrfs Is

The Wikipedia Btrfs documentation captures the project’s origin and intent: “Btrfs (pronounced as ‘better F S’, ‘butter F S’, ‘b-tree F S’, or ‘B.T.R.F.S.’) is a computer storage format that combines a file system based on the copy-on-write (COW) principle with a logical volume manager (distinct from Linux’s LVM), developed together. It was created by Chris Mason in 2007 for use in Linux, and since November 2013, the file system’s on-disk format has been declared stable in the Linux kernel. Btrfs is intended to address the lack of pooling, snapshots, integrity checking, data scrubbing, and integral multi-device spanning in Linux file systems.”1

The four-pronunciation problem

The community has never quite settled on how to pronounce “Btrfs.” The Wikipedia documentation lists four common variants: “better F S” (the optimistic interpretation), “butter F S” (phonetic), “b-tree F S” (technical), and “B.T.R.F.S.” (literal letter-by-letter). The “butter F S” pronunciation has been the most popular informally; the “better F S” pronunciation reflects the project’s design ambition. Any of them is acceptable in practice; the file system itself doesn’t care.

The Linux file system gaps it addresses

Btrfs’s design intent reflects specific gaps in Linux’s traditional file system landscape circa 2007:

  • Pooling: ext4 and similar file systems work on single block devices; LVM provided pooling externally but with overhead and complexity.
  • Snapshots: ext4 had no native snapshot support; LVM snapshots existed but were COW-based with performance penalties.
  • Integrity checking: ext4 verified metadata but not data; silent bit rot went undetected.
  • Data scrubbing: no proactive integrity verification was available.
  • Integral multi-device spanning: RAID required md-raid + LVM + ext4 layered together.

Btrfs was designed to provide all of these in a single integrated file system; the design was explicitly modeled on what ZFS had demonstrated was possible.

The Oracle origin and the Mason departure

Chris Mason started Btrfs at Oracle in 2007. After Oracle’s 2010 acquisition of Sun brought ZFS in-house, Mason eventually moved to Facebook (now Meta), where he continued Btrfs development. The acquisition affected both COW file systems: ZFS’s open-source future became uncertain (leading to OpenZFS), and Btrfs’s most prominent corporate backer now owned a competing technology. Mason’s move to Meta provided a new corporate sponsor; Meta uses Btrfs at significant scale internally.

Mainline kernel inclusion

Btrfs was merged into the mainline Linux kernel in 2009 (Linux 2.6.29). Unlike ZFS (which can’t be merged due to CDDL/GPL incompatibility), Btrfs is GPL-licensed and ships as a built-in part of the kernel. This is Btrfs’s biggest single advantage over ZFS on Linux: it’s available everywhere Linux runs, no separate kernel modules, no licensing concerns, no DKMS rebuilds. The disadvantage is that Btrfs has historically been less mature; ZFS had a 5-year head start and substantially more production-scale testing.

The deployment landscape

Btrfs’s production footprint:

  • SUSE Linux Enterprise / openSUSE: default file system since 2014.
  • Fedora Workstation: default since Fedora 33 (October 2020).
  • Synology DSM: default for newer NAS models.
  • Meta: internal use at large scale.
  • Many smaller distributions and use cases: Manjaro, Garuda, etc.
  • Red Hat / RHEL: deprecated Btrfs in 2017 and removed it from RHEL 8; a significant credibility hit, but one that didn’t stop other distributions from adopting Btrfs.

The B-tree Architecture and CoW Mechanism

Btrfs’s name reflects its foundational data structure: B-trees are used for nearly everything. Understanding the B-tree organization clarifies how Btrfs’s features fit together.2

B-trees as the universal structure

The DeepWiki Btrfs documentation captures the architectural principle: “Btrfs organizes all data and metadata as B-trees, using a copy-on-write mechanism for all modifications. This architecture enables advanced features like snapshots and provides strong data integrity guarantees.” Btrfs maintains multiple B-trees, each serving a specific purpose:

  • Root tree: tracks all other trees in the file system.
  • Extent tree: tracks allocated extents and reference counts.
  • Chunk tree: maps logical addresses to physical disk locations.
  • Device tree: tracks all devices in the file system.
  • Per-subvolume file system trees: contain the actual file and directory metadata.
  • Checksum tree: stores data block checksums.
  • Quota tree: tracks quota usage when qgroups are enabled.

The DeepWiki documentation notes: “The fundamental data structure in Btrfs is the B-tree. All trees share a common node format but differ in their key structure and item content.” The uniform tree node format simplifies the implementation; the same tree-walking code works for all trees.

Copy-on-write mechanics

The Linux Hint Btrfs guide describes the CoW principle: “In a CoW filesystem, when you try to modify data on the filesystem, the filesystem copies the data, modifies the data, and then writes the modified data back to a different free location of the filesystem. The main advantage of the Copy-on-Write (CoW) filesystem is that the data extent it wants to modify is copied to a different location, modified, and stored in a different extent of the filesystem.”3 The CoW process for a typical write:

  1. The application requests a write to an existing file.
  2. Btrfs allocates a new extent in free space.
  3. The new data is written to the new extent.
  4. The B-tree metadata is updated (also via CoW): a new tree path is written that points to the new extent.
  5. The superblock is updated to point to the new tree root.
  6. Only after the superblock update is committed is the old extent considered free.

Why CoW eliminates journaling

The LWN.net introduction to Btrfs captures the crash recovery property: “Since old data is not overwritten, recovery from crashes and power failures should be more straightforward; if a transaction has not completed, the previous state of the data (and metadata) will be where it always was. So, among other things, a COW filesystem does not need to implement a separate journal to provide crash resistance.”4 This is the same property that makes ZFS journal-free; the file system is always in a consistent state on disk because old data is never overwritten until the new data is fully committed. Btrfs’s CoW design replaces the architectural complexity of journaling file systems with cleaner atomic transaction semantics.

The CoW performance trade-off

The LWN documentation captures the cost: “Copying blocks can take more time than simply overwriting them as well as significantly increasing the filesystem’s memory requirements. COW operations will tend to fragment files.” Specific implications:

  • Random writes to large files (databases, VM disk images) generate fragmentation as new extents are allocated.
  • The nodatacow mount option or per-file chattr +C attribute disables CoW for specific files where overwrite-in-place performance is preferable.
  • Online defragmentation can periodically defragment files when fragmentation accumulates.

Database and VM workloads on Btrfs typically use nodatacow; the performance gain is substantial and the integrity loss is acceptable for those use cases.
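
A minimal sketch of the per-directory approach (the path is hypothetical; note that chattr +C only takes effect on new or empty files, so the attribute is normally set on the directory before files are created):

  # Disable CoW for a directory holding VM images; new files inherit +C
  mkdir -p /var/lib/vm-images
  chattr +C /var/lib/vm-images

  # Verify: lsattr shows 'C' when the no-CoW attribute is set
  lsattr -d /var/lib/vm-images

Files written without CoW also lose data checksumming (nodatacow implies nodatasum), which is the integrity trade-off mentioned above.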

Checksumming and self-healing

Btrfs checksums both data and metadata by default. The Btrfs Read the Docs documentation captures the integrity feature: “Self-healing – checksums for data and metadata, automatic detection of silent data corruptions.” When data is read, Btrfs compares the actual checksum to the stored value; mismatches indicate corruption. With redundancy (RAID 1, RAID 10), Btrfs automatically retrieves the correct data from a redundant copy and rewrites the corrupt copy. The default checksum algorithm is CRC32C; kernels 5.5 and later also support xxhash, SHA-256, and BLAKE2b for stronger integrity guarantees at higher CPU cost.
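
For illustration, the algorithm is chosen at file system creation time (device name hypothetical; the non-default algorithms need kernel and btrfs-progs 5.5+):

  # Create a file system with xxhash checksums for data and metadata
  mkfs.btrfs --csum xxhash /dev/sdb1

  # Confirm the checksum type from the superblock
  btrfs inspect-internal dump-super /dev/sdb1 | grep csum_type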

Compression

Btrfs supports transparent compression with three algorithms:

  • zlib: slower, with good ratios; available on the oldest kernels.
  • lzo: fast but lower compression ratios.
  • zstd: the modern recommended choice; good speed and compression.

Compression is enabled per-file-system or per-subvolume via the compress=zstd mount option, or per-file via chattr +c. Modern best practice is to enable zstd compression universally; the CPU overhead is minimal and the space savings on compressible data can be substantial.
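
A hedged example of both approaches (device and paths hypothetical; the :3 level suffix is accepted on kernels 5.1+):

  # Mount with zstd compression; level 3 is a common middle ground
  mount -o compress=zstd:3 /dev/sdb1 /mnt/data

  # Equivalent persistent entry in /etc/fstab:
  # /dev/sdb1  /mnt/data  btrfs  compress=zstd:3  0  0

  # Per-file or per-directory: flag content for compression
  chattr +c /mnt/data/logs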

Subvolumes, Snapshots, and Reflinks

Btrfs’s subvolume system provides flexible namespace management within a single file system. Combined with snapshots and reflinks, subvolumes form the foundation of Btrfs’s backup and rollback features.5

Subvolumes

The Wondershare Recoverit Btrfs explainer captures the concept: “The Btrfs file system’s root tree branches into binary trees. Each binary tree is a separate namespace that we call a subvolume.” A subvolume is a separately mountable file system within a larger Btrfs file system; subvolumes provide the following (a short command sketch follows the list):

  • Independent mount points: each subvolume can be mounted separately with its own mount options.
  • Per-subvolume properties: compression, atime, etc., can vary between subvolumes.
  • Hierarchical organization: subvolumes can contain other subvolumes.
  • Shared free space: all subvolumes in a file system share the underlying capacity.
  • Cross-subvolume operations: snapshots, reflinks, send/receive can work across subvolumes.
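
A minimal command sketch (mount point and subvolume names hypothetical; the @ prefix is only a common naming convention):

  # Create subvolumes inside a mounted Btrfs file system
  btrfs subvolume create /mnt/pool/@home
  btrfs subvolume create /mnt/pool/@var

  # Mount one subvolume independently, with its own options
  mount -o subvol=@home,compress=zstd /dev/sdb1 /home

  # List all subvolumes in the file system
  btrfs subvolume list /mnt/pool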

Snapshots

Btrfs snapshots are special subvolumes that initially share all data with their source subvolume but diverge over time as either is modified. The LWN documentation describes the snapshot mechanic: “A snapshot is a virtual copy of the filesystem’s contents; it can be created without copying any of the data at all. If, at some later point, a block of data is changed (in either the snapshot or the original), that one block is copied while all of the unchanged data remains shared. Snapshots can be used to provide a sort of ‘time machine’ functionality, or to simply roll back the system after a failed update.”

Btrfs snapshots are writable by default, a notable difference from ZFS’s read-only snapshots (a command sketch follows this list):

  • Writable snapshots can be modified independently of the source.
  • Modifications to either source or snapshot trigger CoW for the modified blocks.
  • Snapshots can be made read-only with btrfs property set for situations where immutability is desired.
  • Snapshots are atomic and instant; creation takes constant time regardless of file system size.
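
A brief sketch (paths hypothetical):

  # Writable snapshot (the default)
  btrfs subvolume snapshot /mnt/pool/@home /mnt/pool/@home-work

  # Read-only snapshot, e.g. as a stable source for btrfs send
  btrfs subvolume snapshot -r /mnt/pool/@home /mnt/pool/@home-2024-01-01

  # Flip an existing snapshot to read-only
  btrfs property set /mnt/pool/@home-work ro true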

Reflinks (cp --reflink)

The Wikipedia Btrfs documentation describes the reflink mechanic: “By cloning, the file system does not create a new link pointing to an existing inode; instead, it creates a new inode that initially shares the same disk blocks with the original file.” Reflinks provide a copy mechanism that’s essentially free (no data copied) while giving truly independent files (a brief example follows the list):

  • Hard links are different names for the same file (one inode); reflinks are two files (two inodes) that share data blocks.
  • Modifying either file triggers CoW for the modified blocks; both files diverge transparently.
  • GNU coreutils 7.5 and later provide the cp --reflink option, which creates reflinks on Btrfs.
  • VM disk images, container layers, and similar large-file scenarios benefit substantially from reflinks.
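
For illustration (file names hypothetical):

  # Reflink copy: new inode, shared data blocks, near-instant even for
  # a multi-gigabyte image
  cp --reflink=always vm-disk.qcow2 vm-disk-clone.qcow2

  # --reflink=auto falls back to a normal copy on file systems that
  # don't support reflinks
  cp --reflink=auto source.bin dest.bin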

Send and receive

Btrfs includes a send/receive replication mechanism similar to ZFS’s (a workflow sketch follows the list):

  • btrfs send generates a stream from a snapshot.
  • btrfs receive applies the stream to a target file system.
  • Incremental sends use a parent snapshot for efficiency.
  • The mechanism supports off-site replication and automated backup workflows.
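
A sketch of the workflow (paths and host name hypothetical; send operates on read-only snapshots):

  # Full send of a read-only snapshot into another Btrfs file system
  btrfs send /mnt/pool/@home-2024-01-01 | btrfs receive /mnt/backup

  # Incremental send: only the delta since the parent snapshot, which
  # must already exist on the receiving side
  btrfs send -p /mnt/pool/@home-2024-01-01 /mnt/pool/@home-2024-01-08 \
    | btrfs receive /mnt/backup

  # Off-site replication over SSH
  btrfs send -p /mnt/pool/@home-2024-01-01 /mnt/pool/@home-2024-01-08 \
    | ssh backup-host btrfs receive /srv/replicas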

In-place ext4 conversion

The Linux Hint Btrfs documentation describes a useful capability: “The Btrfs filesystem conversion program reads the metadata of an existing Ext2/3/4 (or ReiserFS) filesystem, creates Btrfs metadata, and stores them on the filesystem. The filesystem keeps both the Btrfs and the Ext2/3/4 (or ReiserFS) metadata. The Btrfs filesystem points to the same file blocks used by the Ext2/3/4 (or ReiserFS) filesystem files. The existing filesystem and data blocks are kept untouched as Btrfs is a Copy-on-Write (CoW) filesystem.” The conversion:

  1. Reads the source ext file system’s metadata.
  2. Builds Btrfs metadata describing the same data layout.
  3. Writes the Btrfs metadata alongside the original ext metadata.
  4. Creates a special “ext2_saved” subvolume containing the original ext metadata for rollback.
  5. The file system can now be mounted as Btrfs with all the original data accessible.

The conversion is reversible until the rollback subvolume is deleted; this provides a safety net for testing Btrfs without committing to the new file system permanently.
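
The command-line shape of the process (device hypothetical; the file system must be unmounted and should pass e2fsck first):

  # Check the source file system, then convert in place
  e2fsck -f /dev/sdb1
  btrfs-convert /dev/sdb1

  # Roll back to ext4 (possible while ext2_saved still exists)
  btrfs-convert -r /dev/sdb1

  # Or commit: mount as Btrfs and delete the rollback subvolume
  mount /dev/sdb1 /mnt
  btrfs subvolume delete /mnt/ext2_saved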

Built-in Volume Management and RAID

Btrfs includes integrated volume management and software RAID, eliminating the need for separate LVM and md-raid layers. The volume management is mature; RAID 5/6 has been the long-running point of concern.

Multi-device support

The Btrfs Read the Docs introduction captures the volume features: “Built-in volume management, support for software-based RAID 0, RAID 1, RAID 10 and others.” A Btrfs file system can span multiple devices natively; adding or removing devices is online and requires no unmount. Specific operations:

  • Add device: btrfs device add /dev/sdc /mountpoint adds a new device.
  • Remove device: btrfs device remove /dev/sdc /mountpoint migrates data off and removes.
  • Replace device: btrfs replace start replaces a failing device with a new one.
  • Balance: btrfs balance start redistributes data across devices.
  • Resize: btrfs filesystem resize grows or shrinks the file system.

RAID profiles

Btrfs supports multiple RAID profiles that can be set independently for data and metadata:

Profile          Min disks   Survives                        Production status
single           1           None                            Stable
dup              1           Single-device corruption only   Stable; metadata default
RAID 0           2           None (striped)                  Stable
RAID 1           2           1 disk failure                  Stable; widely used
RAID 1c3 / 1c4   3 / 4       2 / 3 disk failures             Stable (newer kernels)
RAID 10          4           Depends (at least 1)            Stable
RAID 5           3           1 disk failure                  Not recommended for production
RAID 6           4           2 disk failures                 Not recommended for production
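
A sketch of setting profiles at creation time and converting later (device names hypothetical; RAID 1c3 requires kernel 5.5+ and at least three devices):

  # Two-disk mirror: RAID 1 for both data (-d) and metadata (-m)
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

  # Convert the metadata profile later, on a live, mounted file system
  btrfs balance start -mconvert=raid1c3 /mnt/pool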

The RAID 5/6 issue

Btrfs RAID 5 and RAID 6 have long-standing reliability issues that the upstream documentation flags as concerns for production use. Specific issues include:

  • Write hole: like classical RAID 5/6, Btrfs’s implementation can produce inconsistent stripes during power failures. Despite Btrfs’s CoW design generally eliminating write holes, the RAID 5/6 implementation has not solved this problem.
  • Multi-disk failure handling: edge cases during recovery from multiple-disk failures have produced data loss.
  • Scrub and balance interactions: certain combinations have produced unexpected results.

The upstream guidance has been clear for years: don’t use Btrfs RAID 5 or RAID 6 for production data without understanding the risks. Users wanting parity-based RAID typically choose either md-raid (Linux’s traditional software RAID) with Btrfs single-disk on top, or ZFS which has a robust RAID-Z implementation.

RAID 1 and RAID 1c3/1c4

Btrfs RAID 1 stores 2 copies of every block; newer Btrfs adds RAID 1c3 (3 copies) and RAID 1c4 (4 copies) for higher redundancy:

  • RAID 1 with 2 disks: standard mirror; survives 1 disk failure; usable capacity 50%.
  • RAID 1c3 with 3+ disks: 3-way mirror; survives 2 disk failures; usable capacity 33%.
  • RAID 1c4 with 4+ disks: 4-way mirror; survives 3 disk failures; usable capacity 25%.
  • A common combination: RAID 1c3 for metadata plus RAID 1 for data, buying extra metadata redundancy at little capacity cost.

Btrfs RAID 1’s behavior with more than 2 disks differs from traditional RAID 1: each block is stored as exactly two copies placed on any two of the disks (not mirrored across all of them), so usable capacity grows with disk count while the array still survives one disk failure.

SSD optimization

The Btrfs Read the Docs introduction notes: “SSD/NVMe (flash storage) awareness, TRIM/Discard for reporting free blocks for reuse and optimizations (e.g. avoiding unnecessary seek optimizations, sending writes in clusters).” Btrfs detects rotational vs SSD storage automatically and adjusts behaviors accordingly. SSD-aware optimizations are particularly valuable for Btrfs’s CoW workload; the constant new-extent allocation pattern interacts well with TRIM-aware flash management.

Btrfs and Data Recovery

Btrfs recovery has its own toolset and conventions that differ from both traditional Linux file systems and ZFS. The recovery work depends heavily on what’s actually damaged.

What CoW makes irrelevant

Like ZFS, Btrfs’s CoW design eliminates several traditional recovery concerns:

  • Crash recovery: the file system is always consistent on disk; crashes are recovered by simply replaying the log if needed, in seconds.
  • fsck-after-crash: not needed because CoW maintains consistency.
  • Bit rot recovery: automatic on RAID 1/10 file systems with checksums.
  • Partial-write corruption: prevented by atomic transactions.

btrfs scrub: proactive integrity verification

The btrfs scrub start command verifies every block in the file system against its checksum, repairing corruption from redundant copies where possible. Best practice is to schedule monthly scrubs on production Btrfs systems; regular scrubs catch corruption before it accumulates beyond what redundancy can repair. Scrub is to Btrfs what zpool scrub is to ZFS: the proactive verification that turns silent corruption into automatic repair.
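
Typical usage (mount point hypothetical):

  # Start a scrub in the background, then poll its progress
  btrfs scrub start /mnt/pool
  btrfs scrub status /mnt/pool

  # Run in the foreground (-B) so a cron job or systemd timer can
  # capture the exit status
  btrfs scrub start -B /mnt/pool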

btrfs check (formerly btrfsck)

The btrfs check command (still aliased as btrfsck for compatibility) is the equivalent of fsck for Btrfs. It examines the file system structure for inconsistencies. The standard usage (illustrated after the list):

  • Read-only check: btrfs check /dev/sdX reports issues without modifying anything. This is safe.
  • Repair attempts: btrfs check --repair attempts to fix issues. The upstream documentation includes strong warnings against using --repair on production data because it can make corruption worse.
  • Specific repair modes: --init-csum-tree, --init-extent-tree, etc., target specific structures and have similar warnings.
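
The safe form, for illustration (device hypothetical; btrfs check must run against an unmounted file system):

  # Read-only structural check: reports problems, modifies nothing
  btrfs check /dev/sdb1

  # The repair form exists, but heed the upstream warnings; treat it as
  # a last resort, and only after imaging the device:
  # btrfs check --repair /dev/sdb1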

btrfs restore: the safe recovery path

For severely damaged Btrfs file systems, btrfs restore is typically the safer recovery path. It extracts files from a damaged file system to another location without modifying the original:

  1. btrfs restore /dev/damaged-fs /path/to/recovery-target extracts files.
  2. Various options control what’s restored (specific files, with or without metadata, etc.).
  3. The original damaged file system is unchanged.
  4. Recovered files are written to a clean destination.

btrfs restore is generally preferred over btrfs check --repair for severe damage scenarios; the read-only approach can’t make things worse.
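
A cautious workflow sketch (device and target hypothetical):

  # Dry run (-D): list what would be recovered without writing anything
  btrfs restore -D -v /dev/sdb1 /mnt/recovery-target

  # Real extraction: -i ignores errors and continues, -m restores
  # ownership, permissions, and timestamps
  btrfs restore -v -i -m /dev/sdb1 /mnt/recovery-target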

btrfs-rescue subcommands

For specific failure modes, the btrfs-rescue suite provides targeted tools:

  • btrfs rescue zero-log: clears the file system’s log if log replay is causing mount failures.
  • btrfs rescue super-recover: rebuilds the superblock from backup copies if the primary is damaged.
  • btrfs rescue chunk-recover: attempts to reconstruct the chunk tree by scanning the devices when the chunk tree is damaged.
  • btrfs rescue fix-device-size: fixes device size mismatches.

These are advanced tools that require care; running them without understanding can damage the file system further.
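
Before running any of them, a read-only inspection is worth doing (device hypothetical):

  # Dump all superblock copies without modifying anything
  btrfs inspect-internal dump-super -a /dev/sdb1

  # super-recover does write; it prompts before replacing a bad copy
  btrfs rescue super-recover -v /dev/sdb1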

Mount-time recovery options

Several mount options can help with recovery scenarios (example commands follow the list):

  • -o ro,nologreplay: read-only mount that skips log replay; useful for accessing files from a file system that won’t mount normally.
  • -o usebackuproot: tries the backup tree roots recorded in the superblock if the primary is damaged (this replaced the older recovery option in kernel 4.6).
  • -o degraded: mounts a multi-device file system with missing devices.
  • -o skip_balance: skips resuming an interrupted balance at mount time.
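
As mount commands (device and mount point hypothetical):

  # Read-only, no log replay: often enough to copy data off
  mount -o ro,nologreplay /dev/sdb1 /mnt/rescue

  # Fall back to backup tree roots if the primary root is damaged
  mount -o ro,usebackuproot /dev/sdb1 /mnt/rescue

  # Newer kernels group these under rescue=; rescue=all enables every
  # read-only rescue option at once
  mount -o ro,rescue=all /dev/sdb1 /mnt/rescue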

When to involve professionals

Btrfs recovery becomes professional-territory when:

  • Multiple disks have failed in a RAID configuration beyond redundancy.
  • RAID 5/6 file systems have failed (these are particularly hard to recover from).
  • btrfs check --repair has been run and made things worse.
  • Hardware failures affect the file system metadata structures.
  • Encryption has been used and keys are lost.

Professional services with Btrfs expertise are scarcer than ZFS expertise; few recovery vendors have deep Btrfs knowledge. Cleanroom recovery applies for physical damage; the file system layer is recovered separately after physical access is restored.

Btrfs brings ZFS-class features to mainstream Linux without the licensing constraints, making COW file system features available to every Linux user without separate kernel modules or DKMS rebuilds. For single-disk and RAID 1/10 use cases, Btrfs is reliable and widely deployed; the SUSE Linux Enterprise, Fedora, and Synology deployments demonstrate that Btrfs handles production workloads well at those configurations. RAID 5/6 remains the major caveat; users wanting parity-based protection should look elsewhere (md-raid + Btrfs, or ZFS via OpenZFS).6

For users wondering whether to choose Btrfs or ZFS, the practical answer depends on the platform and workload. Linux desktop users wanting CoW features typically benefit from Btrfs’s mainline integration; Fedora’s default Btrfs setup works well out of the box. Linux server users with parity RAID requirements typically benefit from ZFS via OpenZFS despite the licensing complexity; the RAID-Z implementation is robust where Btrfs RAID 5/6 isn’t. NAS appliance users typically choose whatever the vendor ships (Synology Btrfs, TrueNAS ZFS); both work well for their target use cases. Large-scale Linux deployments at companies like Meta have made Btrfs work at significant scale; the file system is genuinely production-ready at the appropriate configurations.

For users facing potential Btrfs data loss, the practical guidance is consistent. Stop modifying the file system immediately. Try mount-time recovery options first (-o ro,nologreplay, adding usebackuproot if the primary tree root is damaged); these are safe and often provide enough access to copy data off. Avoid btrfs check --repair on production data; the upstream warnings are real, and repair attempts can make things worse. Use btrfs restore for severely damaged file systems; the read-only extraction approach is safer than repair attempts. Recovery software for traditional file systems doesn’t typically handle Btrfs well; specialized tools or professional services with Btrfs expertise are usually needed for non-trivial scenarios. Comprehensive backups remain essential; Btrfs’s snapshots provide cheap rollback for user-error scenarios but don’t substitute for off-site backups against site-level disasters or rare software bugs.

Btrfs FAQ

What is Btrfs?

Btrfs (B-tree File System, pronounced ‘better F S’ or ‘butter F S’) is a modern copy-on-write file system for Linux that combines a file system, logical volume manager, and software RAID into one integrated stack. It was created by Chris Mason at Oracle in 2007 and was merged into the mainline Linux kernel in 2009; the on-disk format was declared stable in November 2013. Btrfs organizes all data and metadata as B-trees and uses copy-on-write mechanics for all modifications. Btrfs is the default file system on SUSE Linux Enterprise, openSUSE, Fedora Workstation (since Fedora 33), and Synology DSM.

How does Btrfs differ from ZFS?

Btrfs and ZFS share many design principles (copy-on-write, integrated volume management, snapshots, checksumming) but differ substantially in implementation and licensing. Licensing: Btrfs is GPL-licensed and ships in the mainline Linux kernel; ZFS is CDDL-licensed and ships as a separate kernel module. Architecture: Btrfs stores everything in B-trees with a single transaction model; ZFS uses an object-based design with vdevs and pools. RAID: ZFS RAID-Z is robust and write-hole-free; Btrfs RAID 0/1/10 is stable but RAID 5/6 has long-running write hole and other reliability issues. Maturity: Btrfs has stable single-device and RAID 1/10 use cases but historically more rough edges; ZFS has more comprehensive testing in production at scale.

What are Btrfs subvolumes?

A Btrfs subvolume is a separately mountable file system within a larger Btrfs file system; subvolumes are roughly analogous to ZFS datasets. Each subvolume has its own root directory and can be mounted independently with its own mount options (compression, atime, etc.). Subvolumes are organized in a tree under the file system root, with the top-level subvolume containing all others by default. Subvolumes are space-shared (all subvolumes share the underlying file system’s free space) but namespace-separate (files in one subvolume aren’t directly visible from another). They form the basis for Btrfs’s snapshot mechanism: a snapshot is a special subvolume that initially shares all data with its source but diverges over time as either is modified.

What is the Btrfs RAID 5/6 issue?

Btrfs implements software RAID 0, 1, 10, 5, and 6, but RAID 5 and RAID 6 have long-standing reliability issues that the upstream documentation has flagged as problems for production use. The specific concerns include write hole vulnerabilities (similar to traditional RAID 5/6 but not solved by Btrfs’s CoW design as ZFS RAID-Z solves it), incomplete handling of multi-disk failures during specific recovery scenarios, and edge cases in scrub and balance operations. RAID 0, RAID 1, and RAID 10 in Btrfs are stable and widely deployed in production. Users wanting parity-based RAID on Linux are typically directed to either md-raid (Linux software RAID) with Btrfs on top, or to ZFS via OpenZFS, both of which provide more reliable parity protection than Btrfs’s native RAID 5/6.

Can I convert ext4 to Btrfs in place?

Yes, Btrfs includes a conversion tool that converts existing ext2/ext3/ext4 file systems to Btrfs in place without copying data. The Linux Hint Btrfs guide describes the mechanism: the conversion program reads the metadata of an existing ext file system, creates Btrfs metadata, and stores both side by side; the Btrfs metadata points to the same data blocks the ext file system used. The original ext data is preserved as a special subvolume that can be used to roll back the conversion if needed. Once the conversion is verified, the rollback subvolume can be deleted to free its space. The conversion is reversible until the rollback subvolume is removed; this provides safety in case Btrfs has issues with the converted file system.

How is Btrfs data recovered?

Btrfs recovery uses tools specifically designed for the file system. For mountable but corrupted file systems, btrfs scrub validates and repairs checksum mismatches when redundancy is available. For unmountable file systems, btrfs check (formerly btrfsck) can identify and sometimes repair structural damage, though the upstream documentation includes strong warnings against using btrfs check --repair on production systems because it can make corruption worse. The btrfs restore command extracts files from a damaged file system to another location without modifying the original; this is typically the safer recovery path. For severe scenarios, btrfs-rescue subcommands (zero-log, super-recover, chunk-recover) target specific failure modes; these are advanced tools requiring care. Mounting read-only with -o ro,nologreplay (adding usebackuproot if needed) can sometimes provide enough access to copy data off.

Related glossary entries

  • ZFS: the older COW file system Btrfs was conceptually modeled on.
  • Journaling File System: the architectural pattern Btrfs replaces with copy-on-write.
  • ext4: traditional Linux file system; Btrfs offers in-place conversion.
  • Inode: per-file metadata structure; Btrfs uses dynamic inode allocation.
  • RAID: Btrfs implements software RAID 0/1/10 stably; RAID 5/6 has issues.
  • Volume: Btrfs subvolumes are conceptually similar to volume management.
  • Cleanroom Recovery: physical drive damage requires cleanroom regardless of file system.

Sources

  1. Wikipedia: Btrfs (accessed May 2026)
  2. DeepWiki: Btrfs Filesystem (Linux kernel docs)
  3. Linux Hint: Introduction to Btrfs Filesystem
  4. LWN.net: The Btrfs filesystem: An introduction
  5. Wondershare Recoverit: Btrfs File System: Definition, Features, Pros & Cons
  6. Btrfs Documentation: BTRFS Introduction

About the Authors

Researched & Reviewed By
Rachel Dawson
Technical Approver · Data Recovery Engineer

Rachel brings over twelve years of data recovery engineering experience including Btrfs recovery scenarios. The most consistent pattern in Btrfs cases involves users running btrfs check --repair when the upstream documentation explicitly warned against it; the repair attempts often turn recoverable corruption into unrecoverable damage. The “btrfs restore is safer than btrfs check --repair” guidance is real and important. RAID 5/6 Btrfs failures are genuinely difficult to recover from; users encountering them are typically directed to professional services with Btrfs expertise, of which there are relatively few. Snapshots provide easy user-error recovery; comprehensive snapshot rotation plus off-site send/receive replicas eliminate most data loss scenarios.

Editorial Independence & Affiliate Disclosure

Data Recovery Fix earns revenue through affiliate links on some product recommendations. This does not influence our reference content. Glossary entries are written and reviewed independently based on documented research, vendor documentation, independent testing, and recovery-engineer review. If anything on this page looks inaccurate, outdated, or worth revisiting, please reach out at contact@datarecoveryfix.com and we’ll review it promptly.
