{{Article}}
{{fancyimportant|BTRFS is still '''experimental''' even with the latest Linux kernels (3.4-rc at the date of writing), so be prepared to lose some data sooner or later, or to hit severe issues/regressions/"itchy" bugs. Subliminal message: '''Do not put critical data on BTRFS partitions'''.}}

= Introduction =

BTRFS is an advanced filesystem mostly contributed by Sun/Oracle, whose origins go back to 2007. BTRFS aims to provide a modern answer for making storage more flexible and efficient. According to its main contributor, Chris Mason, the goal was "to let Linux scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what's being used and makes it more reliable." (Ref. http://en.wikipedia.org/wiki/Btrfs)

Btrfs, often compared to ZFS, offers some interesting features:

* Usage of very few fixed-location metadata, thus allowing an existing ext2/ext3 filesystem to be "upgraded" in-place to BTRFS.
* Operations are transactional.
* Online volume defragmentation (online filesystem check is on the radar but is not yet implemented).
* Built-in storage pool capabilities (no need for LVM).
* Built-in RAID capabilities (both for the data and the filesystem metadata). RAID-5/6 is planned for 3.5 kernels.
* Capabilities to grow/shrink the volume.
* Subvolumes and snapshots (extremely powerful, you can "rollback" to a previous filesystem state as if nothing had happened).
* Copy-On-Write.
* Usage of B-Trees to store the internal filesystem structures (B-Trees are known to have a logarithmic growth in depth, thus making them more efficient when scanning).

= Requirements =

A recent Linux kernel (the BTRFS metadata format evolves from time to time, and mounting with a recent Linux kernel can make the BTRFS volume unreadable with an older kernel revision, e.g. Linux 2.6.31 vs Linux 2.6.30). You must also use sys-fs/btrfs-progs (0.19, or better use -9999 which points to the git repository).

= Playing with BTRFS storage pool capabilities =

Whereas it would be possible to use btrfs just as you are used to under a non-LVM system, it shines in its built-in storage pool capabilities. Tired of playing with LVM? :-) Good news: you do not need it anymore with btrfs.

== Setting up a storage pool ==

BTRFS terminology is a bit confusing. If you have already used another 'advanced' filesystem like ZFS, or a mechanism like LVM, it's good to know that there are many correlations. In the BTRFS world, the word ''volume'' corresponds to a storage pool (ZFS) or a volume group (LVM). Ref. http://www.rkeene.org/projects/info/wiki.cgi/165

The test bench uses disk images through loopback devices. Of course, in a real world case, you will use local drives or units through a SAN. To start with, 5 devices of 1 GiB are allocated:

<console>
###i## dd if=/dev/zero of=/tmp/btrfs-vol0.img bs=1G count=1
###i## dd if=/dev/zero of=/tmp/btrfs-vol1.img bs=1G count=1
###i## dd if=/dev/zero of=/tmp/btrfs-vol2.img bs=1G count=1
###i## dd if=/dev/zero of=/tmp/btrfs-vol3.img bs=1G count=1
###i## dd if=/dev/zero of=/tmp/btrfs-vol4.img bs=1G count=1
</console>
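
If you prefer not to write out a full gigabyte of zeros for each image, a sparse file works just as well for this kind of test bench; a minimal sketch with '''truncate''' from GNU coreutils (blocks are then allocated only as they are actually written):

<console>
###i## truncate -s 1G /tmp/btrfs-vol0.img
</console>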

Then attached:

<console>
###i## losetup /dev/loop0 /tmp/btrfs-vol0.img
###i## losetup /dev/loop1 /tmp/btrfs-vol1.img
###i## losetup /dev/loop2 /tmp/btrfs-vol2.img
###i## losetup /dev/loop3 /tmp/btrfs-vol3.img
###i## losetup /dev/loop4 /tmp/btrfs-vol4.img
</console>
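
When the experiments are over, the test bench can be dismantled just as easily. A minimal cleanup sketch, assuming the volume is no longer mounted:

<console>
###i## losetup -d /dev/loop0
###i## losetup -d /dev/loop1
###i## losetup -d /dev/loop2
###i## losetup -d /dev/loop3
###i## losetup -d /dev/loop4
</console>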

== Creating the initial volume (pool) ==

BTRFS uses different strategies to store data and the filesystem metadata (ref. https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices). By default the behavior is:

* metadata is replicated on all of the devices. If a single device is used, the metadata is duplicated inside this single device (useful in case of corruption or a bad sector, since there is a higher chance that one of the two copies is clean). To tell btrfs to maintain a single copy of the metadata, just use ''single''. Remember: dead metadata = dead volume with no chance of recovery.
* data is spread amongst all of the devices (this means no redundancy; any data block left on a defective device will be inaccessible).

To create a BTRFS volume made of multiple devices with default options, use:

<console>
###i## mkfs.btrfs /dev/loop0 /dev/loop1 /dev/loop2
</console>

To create a BTRFS volume made of a single device with a single copy of the metadata (dangerous!), use:

<console>
###i## mkfs.btrfs -m single /dev/loop0
</console>

To create a BTRFS volume made of multiple devices with metadata spread amongst all of the devices, use:

<console>
###i## mkfs.btrfs -m raid0 /dev/loop0 /dev/loop1 /dev/loop2
</console>

To create a BTRFS volume made of multiple devices, with metadata spread amongst all of the devices and data mirrored on all of the devices (you probably don't want this in a real setup), use:

<console>
###i## mkfs.btrfs -m raid0 -d raid1 /dev/loop0 /dev/loop1 /dev/loop2
</console>

To create a fully redundant BTRFS volume (data and metadata mirrored amongst all of the devices), use:

<console>
###i## mkfs.btrfs -d raid1 /dev/loop0 /dev/loop1 /dev/loop2
</console>

{{Fancynote|Technically you can use anything as a physical volume: you can have a volume composed of 2 local hard drives, 3 USB keys, 1 loopback device pointing to a file on an NFS share and 3 logical devices accessed through your SAN (you would be an idiot, but you can, nevertheless). Having different physical volume sizes would lead to issues, but it works :-).}}

== Checking the initial volume ==

To verify the devices of which a BTRFS volume is composed, just use '''btrfs-show ''device'' ''' (old style) or '''btrfs filesystem show ''device'' ''' (new style). You need to specify one of the devices (the metadata has been designed to keep track of which devices are linked to each other). If the initial volume was set up like this:

<console>
###i## mkfs.btrfs /dev/loop0 /dev/loop1 /dev/loop2

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

adding device /dev/loop1 id 2
adding device /dev/loop2 id 3
fs created label (null) on /dev/loop0
         nodesize 4096 leafsize 4096 sectorsize 4096 size 3.00GB
Btrfs Btrfs v0.19
</console>

It can be checked with one of these commands (they are equivalent):

<console>
###i## btrfs filesystem show /dev/loop0
###i## btrfs filesystem show /dev/loop1
###i## btrfs filesystem show /dev/loop2
</console>

The result is the same for all commands:

<console>
Label: none  uuid: 0a774d9c-b250-420e-9484-b8f982818c09
         Total devices 3 FS bytes used 28.00KB
         devid    3 size 1.00GB used 263.94MB path /dev/loop2
         devid    1 size 1.00GB used 275.94MB path /dev/loop0
         devid    2 size 1.00GB used 110.38MB path /dev/loop1
</console>

To show all of the volumes that are present:

<console>
###i## btrfs filesystem show
Label: none  uuid: 0a774d9c-b250-420e-9484-b8f982818c09
         Total devices 3 FS bytes used 28.00KB
         devid    3 size 1.00GB used 263.94MB path /dev/loop2
         devid    1 size 1.00GB used 275.94MB path /dev/loop0
         devid    2 size 1.00GB used 110.38MB path /dev/loop1

Label: none  uuid: 1701af39-8ea3-4463-8a77-ec75c59e716a
         Total devices 1 FS bytes used 944.40GB
         devid    1 size 1.42TB used 1.04TB path /dev/sda2

Label: none  uuid: 01178c43-7392-425e-8acf-3ed16ab48813
         Total devices 1 FS bytes used 180.14GB
         devid    1 size 406.02GB used 338.54GB path /dev/sda4
</console>

{{Fancywarning|The BTRFS wiki mentions that '''btrfs device scan''' should be performed; if you skip this incantation, the volume may not be seen.}}

== Mounting the initial volume ==

BTRFS volumes can be mounted like any other filesystem. The cherry on top of the sundae is that the design of the BTRFS metadata makes it possible to use any of the volume's devices for this. The following commands are equivalent:

<console>
###i## mount /dev/loop0 /mnt
###i## mount /dev/loop1 /mnt
###i## mount /dev/loop2 /mnt
</console>

For every physical device used for mounting the BTRFS volume, <tt>df -h</tt> reports the same (in all cases 3 GiB of "free" space is reported):

<console>
###i## df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop1      3.0G  56K  1.8G  1% /mnt
</console>

The following command prints very useful information (like how the BTRFS volume has been created):

<console>
###i## btrfs filesystem df /mnt
Data, RAID0: total=409.50MB, used=0.00
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=204.75MB, used=28.00KB
Metadata: total=8.00MB, used=0.00
</console>

By the way, as you can see, the mount point should be specified for the btrfs command, not one of the physical devices.

== Shrinking the volume ==

A common practice in system administration is to leave some head space, instead of using the whole capacity of a storage pool (just in case). With btrfs one can easily shrink volumes. Let's shrink the volume a bit (about 25%):

<console>
###i## btrfs filesystem resize -500m /mnt
###i## df -h
/dev/loop1      2.6G  56K  1.8G  1% /mnt
</console>

And yes, it is an on-line resize; there is no need to umount/shrink/mount. So no downtime! :-) However, a BTRFS volume requires a minimum size... if the shrink is too aggressive, the volume won't be resized:

<console>
###i## btrfs filesystem resize -1g /mnt
Resize '/mnt' of '-1g'
ERROR: unable to resize '/mnt'
</console>

== Growing the volume ==

This is the opposite operation: you can make a BTRFS volume grow to reach a particular size (e.g. 150 more megabytes):

<console>
###i## btrfs filesystem resize +150m /mnt
Resize '/mnt' of '+150m'
###i## df -h
/dev/loop1      2.7G  56K  1.8G  1% /mnt
</console>

You can also take an ''"all you can eat"'' approach via the '''max''' option, meaning all of the possible space will be used for the volume:

<console>
###i## btrfs filesystem resize max /mnt
Resize '/mnt' of 'max'
###i## df -h
/dev/loop1      3.0G  56K  1.8G  1% /mnt
</console>
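
'''btrfs filesystem resize''' acts on one member device of the volume at a time. With newer versions of btrfs-progs you can, as a hedged example, target a specific device by prefixing the size with its devid:

<console>
###i## btrfs filesystem resize 2:max /mnt
</console>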

== Adding a new device to the BTRFS volume ==

To add a new device to the volume:

<console>
###i## btrfs device add /dev/loop4 /mnt
###i## btrfs filesystem show /dev/loop4
Label: none  uuid: 0a774d9c-b250-420e-9484-b8f982818c09
         Total devices 4 FS bytes used 28.00KB
         devid    4 size 1.00GB used 0.00 path /dev/loop4
         ...
         devid    1 size 1.00GB used 275.94MB path /dev/loop0
         devid    2 size 1.00GB used 110.38MB path /dev/loop1
</console>

Again, no need to umount the volume first, as adding a device is an on-line operation (the device has no space used yet, hence the '0.00'). The operation is not finished, as we must tell btrfs to prepare the new device (i.e. rebalance/mirror the metadata and the data between all devices):

<console>
###i## btrfs filesystem balance /mnt
###i## btrfs filesystem show /dev/loop4
Label: none  uuid: 0a774d9c-b250-420e-9484-b8f982818c09
         Total devices 4 FS bytes used 28.00KB
         ...
         devid    1 size 1.00GB used 378.38MB path /dev/loop0
         devid    2 size 1.00GB used 110.38MB path /dev/loop1
</console>

{{Fancynote|Depending on the sizes and what is in the volume, a balancing operation could take several minutes or hours.}}
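
With Linux 3.3 and later kernels (and a matching btrfs-progs), the progress of a running balance can be followed from another terminal; a hedged sketch:

<console>
###i## btrfs balance status /mnt
</console>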

== Removing a device from the BTRFS volume ==

<console>
###i## btrfs device delete /dev/loop2 /mnt
###i## btrfs filesystem show /dev/loop0
Label: none  uuid: 0a774d9c-b250-420e-9484-b8f982818c09
         Total devices 4 FS bytes used 28.00KB
         ...
         devid    2 size 1.00GB used 0.00 path /dev/loop1
         *** Some devices missing
###i## df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop1      3.0G  56K  1.5G  1% /mnt
</console>

Here again, removing a device is totally dynamic and can be done as an on-line operation! Note that when a device is removed, its content is transparently redistributed among the other devices.

Once you add a new device to the BTRFS volume as a replacement for a removed one, you can clean up the references to the missing device:

<console>
###i## btrfs device delete missing /mnt
</console>

== Using a BTRFS volume in degraded mode ==

If you use raid1 or raid10 for data AND metadata, and you have a usable submirror accessible (consisting of 1 drive in the case of RAID1, or the two drives of the same RAID0 array in the case of RAID10), you can mount the array in degraded mode in case some devices are missing (e.g. dead SAN link or dead drive):

<console>
###i## mount -o degraded /dev/loop0 /mnt
</console>
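
Once mounted in degraded mode, the commands introduced earlier can be combined into a plausible recovery sequence (a sketch, assuming /dev/loop4 is the replacement device):

<console>
###i## btrfs device add /dev/loop4 /mnt
###i## btrfs device delete missing /mnt
###i## btrfs filesystem balance /mnt
</console>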

If you used RAID0 for the metadata (and one of your drives is inaccessible), or RAID10 but with too few drives on-line for even a degraded mode to be possible, btrfs will refuse to mount the volume:

<console>
###i## mount /dev/loop0 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
</console>

The situation is no better if you have used RAID1 for the metadata and RAID0 for the data: you can mount the volume in degraded mode, but you will encounter problems while accessing your files:

<console>
###i## cp /mnt/test.dat /tmp
cp: reading `/mnt/test.dat': Input/output error
cp: failed to extend `/tmp/test.dat': Input/output error
</console>

= Playing with subvolumes and snapshots =

Maybe you have a question: "Okay, what is the difference between a directory and a subvolume? Both can contain something!". To further confuse you, here is what users get if they reproduce the first level hierarchy on a real machine:

<console>
###i## ls -l
total 0
drwx------ 1 root root 0 May 23 12:48 SV1
...
-rw-r--r-- 1 root root 0 May 23 12:48 F1
drwx------ 1 root root 0 May 23 12:48 SV2
</console>

Although subvolumes SV1 and SV2 have been created with special BTRFS commands, they appear just as if they were ordinary directories! A subtle nuance exists, however, and it echoes the boxes analogy evoked earlier.

So, in the internal filesystem metadata, SV1 and SV2 are stored in a different manner than D1 (although this is transparently handled for users). You can, however, see SV1 and SV2 for what they are (subvolumes) by running the following command (the subvolume numbered 0 has been mounted on /mnt):

<console>
###i## btrfs subvolume list /mnt
ID 258 top level 5 path SV1
ID 259 top level 5 path SV2
</console>

What would we get if we create SV21 and SV22 inside of SV2? Let's try! Before going further you should be aware that a subvolume is created by invoking the magic command '''btrfs subvolume create''':

<console>
###i## cd /mnt/SV2
###i## btrfs subvolume create SV21
Create subvolume './SV21'
###i## btrfs subvolume create SV22
Create subvolume './SV22'
###i## btrfs subvolume list /mnt
ID 258 top level 5 path SV1
ID 259 top level 5 path SV2
ID 260 top level 5 path SV2/SV21
ID 261 top level 5 path SV2/SV22
</console>

Again, invoking '''ls''' in /mnt/SV2 will report the subvolumes as being directories:

<console>
###i## ls -l
total 0
drwx------ 1 root root 0 May 23 13:15 SV21
drwx------ 1 root root 0 May 23 13:15 SV22
</console>
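
If you need to tell a subvolume apart from a plain directory without asking btrfs, the inode number is a reliable hint: the root of a BTRFS subvolume always carries inode number 256. An illustrative check:

<console>
###i## ls -lid SV21
256 drwx------ 1 root root 0 May 23 13:15 SV21
</console>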

== Changing the point of view on the subvolumes hierarchy ==

At some point in our boxes analogy we have talked about what we see and what we don't see depending on our location in the hierarchy. Here lies a big important point: whereas most BTRFS users mount the root subvolume (subvolume id = 0; we will retain the ''root subvolume'' terminology) in their VFS hierarchy, thus making visible the whole hierarchy contained in the BTRFS volume, it is absolutely possible to mount only a ''subset'' of it. How is that possible? Simple: just specify the subvolume number when you invoke mount. For example, to mount the hierarchy in the VFS starting at subvolume SV22 (261), do the following:

<console>
###i## mount -o subvolid=261 /dev/loop0 /mnt
</console>

Here lies an important notion not disclosed in the previous paragraph: although both directories and subvolumes can act as containers, '''only subvolumes can be mounted in a VFS hierarchy'''. It is a fundamental aspect to remember: you cannot mount a sub-part of a subvolume in the VFS; you can only mount the subvolume itself. Considering the hierarchy schema in the previous section, if you want to access the directory D3 you have three possibilities:

# Mount the non-named subvolume (numbered 0) and access D3 through /mnt/SV2/SV22/D3 if the non-named subvolume is mounted in /mnt
# Mount the subvolume SV2 (numbered 259) and access D3 through /mnt/SV22/D3 if the subvolume SV2 is mounted in /mnt
# Mount the subvolume SV22 (numbered 261) and access D3 through /mnt/D3 if the subvolume SV22 is mounted in /mnt

This is accomplished by the following commands, respectively:

<console>
###i## mount -o subvolid=0 /dev/loop0 /mnt
###i## mount -o subvolid=259 /dev/loop0 /mnt
###i## mount -o subvolid=261 /dev/loop0 /mnt
</console>
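
Recent kernels also accept the subvolume ''path'' (relative to the root subvolume) instead of its number, via the ''subvol'' mount option. A hedged equivalent of the last command above:

<console>
###i## mount -o subvol=SV2/SV22 /dev/loop0 /mnt
</console>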

{{fancynote|When a subvolume is mounted in the VFS, everything located "above" the subvolume is hidden. Concretely, if you mount the subvolume numbered 261 in /mnt, you only see what is under SV22; you won't see what is located above SV22 like SV21, SV2, D1, SV1, etc.}}

When you create a brand new BTRFS filesystem, the system not only creates the initial root subvolume (numbered 0) but also tags it as being the '''default subvolume'''. When you ask the operating system to mount a subvolume contained in a BTRFS volume without specifying a subvolume number, it determines which of the existing subvolumes has been tagged as "default subvolume" and mounts it. If none of the existing subvolumes has the tag "default subvolume" (e.g. because the default subvolume has been deleted), the mount command gives up with a rather cryptic message:

<console>
###i## mount /dev/loop0 /mnt
mount: No such file or directory
</console>

It is also possible to change at any time which subvolume contained in a BTRFS volume is considered the default volume. This is accomplished with '''btrfs subvolume set-default'''. The following tags the subvolume 261 as being the default:

<console>
###i## btrfs subvolume set-default 261 /mnt
</console>

After that operation, doing the following is exactly the same:

<console>
###i## mount /dev/loop0 /mnt
###i## mount -o subvolid=261 /dev/loop0 /mnt
</console>
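
To check which subvolume is currently tagged as the default, recent versions of btrfs-progs provide a dedicated subcommand; a sketch, assuming the volume is mounted on /mnt:

<console>
###i## btrfs subvolume get-default /mnt
</console>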

{{fancynote|The chosen new default subvolume must be visible in the VFS when you invoke '''btrfs subvolume set-default'''.}}

An example: considering our initial example given [[BTRFS_Fun#..._applied_to_BTRFS.21_.28or_what_is_a_volume.2Fsubvolume.29|above]] and supposing you have mounted the non-named subvolume numbered 0 in /mnt, you can remove SV22 by doing:

<console>
###i## btrfs subvolume delete /mnt/SV2/SV22
</console>

Obviously, after the operation the subvolume SV22 no longer appears in the BTRFS volume.

The following illustrates how to take a snapshot of the VFS root:

<console>
###i## btrfs subvolume snapshot / /snap-2011-05-23
Create a snapshot of '/' in '//snap-2011-05-23'
</console>
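
On more recent kernels and btrfs-progs than the 0.19-era tools shown here, a snapshot can also be taken read-only with the ''-r'' flag, which is handy for backups; a hedged sketch:

<console>
###i## btrfs subvolume snapshot -r / /snap-ro-2011-05-23
</console>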

Once created, the snapshot will persist in /snap-2011-05-23 as long as you don't delete it. Note that the snapshot contents will remain exactly as they were at the time it was taken (as long as you don't make changes... BTRFS snapshots are writable!). A drawback of having snapshots: if you delete some files in the original filesystem, the snapshot still contains them and the disk blocks can't be reclaimed as free space. Remember to remove unwanted snapshots and keep a bare minimal set of them.

== Listing and deleting snapshots ==

As there is no distinction between a snapshot and a subvolume, snapshots are managed with the exact same commands, especially when the time has come to delete some of them. An interesting feature in BTRFS is that snapshots are writable: you can take a snapshot and make changes in the files/directories it contains. A word of caution: there are no undo capabilities! What has been changed has been changed forever... If you need to do several tests just take several snapshots or, better yet, snapshot your snapshot then do whatever you need in this copy-of-the-copy :-).
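
Since snapshots are subvolumes, the commands seen earlier apply unchanged; for instance (the ID shown is illustrative):

<console>
###i## btrfs subvolume list /
ID 256 top level 5 path snap-2011-05-23
###i## btrfs subvolume delete /snap-2011-05-23
</console>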

== Using snapshots for system recovery (aka Back to the Future) ==

In all cases you must take a snapshot of your VFS root '''before''' updating the system:

<console>
###i## btrfs subvolume snapshot / /before-updating-2011-05-24
Create a snapshot of '/' in '//before-updating-2011-05-24'
</console>

{{fancynote|Hint: You can create an empty file at the root of your snapshot with the name of your choice to help you easily identify which subvolume is the currently mounted one (e.g. if the snapshot has been named '''before-updating-2011-05-24''', you can use a slightly different name like '''current-is-before-updating-2011-05-24''' <nowiki>=></nowiki> '''touch /before-updating-2011-05-24/current-is-before-updating-2011-05-24'''). This is extremely useful if you are dealing with several snapshots.}}

=== Way #1: Fiddle with the default subvolume number ===

First search for the newly created subvolume number:

<console>
###i## btrfs subvolume list /
'''ID 256''' top level 5 path before-updating-2011-05-24
</console>

'256' is the ID to be retained (of course, this ID will differ in your case).

Now, change the default subvolume of the BTRFS volume to designate the subvolume (snapshot) ''before-updating'' and not the root subvolume, then reboot:

<console>
###i## btrfs subvolume set-default 256 /
</console>

Once the system has rebooted, and if you followed the advice in the previous paragraph suggesting to create an empty file of the same name as the snapshot, you should be able to see if the mounted VFS root is the copy held by the snapshot ''before-updating-2011-05-24'':

<console>
###i## ls -l /
...
-rw-rw-rw-  1 root root    0 May 24 20:33 current-is-before-updating-2011-05-24
...
</console>

The correct subvolume has been used for mounting the VFS! Excellent! This is now the time to mount your "production" VFS root (remember the root subvolume can only be accessed via its identification number, i.e. ''0''):

<console>
###i## mount -o subvolid=0 /dev/sda2 /mnt
###i## mount
...
/dev/sda2 on /mnt type btrfs (rw,subvolid=0)
</console>

Oh by the way, as the root subvolume is now mounted in <tt>/mnt</tt>, let's try something, just for the sake of the demonstration:

<console>
###i## ls /mnt
...
drwxr-xr-x  1 root root    0 May 24 20:33 current-is-before-updating-2011-05-24
...
###i## btrfs subvolume list /mnt
ID 256 top level 5 path before-updating-2011-05-24
</console>

No doubt possible :-)

Time to rollback! For this, '''rsync''' will be used in the following way:

<console>
###i## rsync --progress -aHAX --exclude=/proc --exclude=/dev --exclude=/sys --exclude=/mnt / /mnt
</console>

Basically we are asking rsync to copy everything (archive mode, also preserving hard links, ACLs and extended attributes, hence -aHAX) from the currently mounted snapshot to the root subvolume in /mnt, excluding the pseudo-filesystems /proc, /dev and /sys as well as the mount point /mnt itself.

Once finished, you will have to set the default subvolume to be the root subvolume:

<console>
###i## btrfs subvolume set-default 0 /mnt
ID 256 top level 5 path before-updating-2011-05-24
</console>

{{fancywarning|'''DO NOT ENTER / instead of /mnt in the above command; it won't work and you will be under the snapshot before-updating-2011-05-24 the next time the machine reboots.'''}}

Now just reboot and you should be in business again! Once you have rebooted, just check if you are really under the right subvolume:

<console>
###i## ls /
...
drwxr-xr-x  1 root root    0 May 24 20:33 current-is-before-updating-2011-05-24
...
###i## btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24
</console>

At the right place? Excellent! You can now delete the snapshot if you wish, or better: keep it as a lifeboat of "last known good system state."

=== Way #2: Boot on the snapshot via the rootflags kernel parameter ===

First search for the newly created subvolume number:

<console>
###i## btrfs subvolume list /
'''ID 256''' top level 5 path before-updating-2011-05-24
</console>

'256' is the ID to be retained (it can differ in your case).

Now, with your favourite text editor, edit the adequate kernel command line in your bootloader configuration (<tt>/etc/boot.conf</tt>). This file is typically organized in several sections (one per kernel present on the system, plus some global settings).
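
As a purely illustrative sketch (the section name and kernel image name below are assumptions; adapt them to your own <tt>/etc/boot.conf</tt>), booting on the snapshot amounts to appending a ''rootflags'' entry to the parameters of the boot entry you use:

<pre>
"Funtoo Linux" {
        kernel bzImage[-v]
        params += rootfstype=btrfs rootflags=subvolid=256
}
</pre>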

Once the system has rebooted, and if you followed the advice in the previous paragraph suggesting to create an empty file of the same name as the snapshot, you should be able to see if the mounted VFS root is the copy held by the snapshot ''before-updating-2011-05-24'':

<console>
###i## ls -l /
...
-rw-rw-rw-  1 root root    0 May 24 20:33 current-is-before-updating-2011-05-24
...
</console>

The correct subvolume has been used for mounting the VFS! Excellent! This is now the time to mount your "production" VFS root (remember the root subvolume can only be accessed via its identification number ''0''):

<console>
###i## mount -o subvolid=0 /dev/sda2 /mnt
###i## mount
...
/dev/sda2 on /mnt type btrfs (rw,subvolid=0)
</console>

Time to rollback! For this, '''rsync''' will be used in the following way:

<console>
###i## rsync --progress -aHAX --exclude=/proc --exclude=/dev --exclude=/sys --exclude=/mnt / /mnt
</console>

Here, please refer to what has been said in [[BTRFS_Fun#Way_.231:_Fiddle_with_the_default_subvolume_number|Way #1]] concerning the rsync options used. Once everything is in place again, edit your bootloader configuration to remove the rootflags/real_rootflags kernel parameter, reboot, and check if you are really under the right subvolume:

<console>
###i## ls /
...
drwxr-xr-x  1 root root    0 May 24 20:33 current-is-before-updating-2011-05-24
...
###i## btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24
</console>

At the right place? Excellent! You can now delete the snapshot if you wish, or better: keep it as a lifeboat of "last known good system state."

== Using a subvolume as the root of a Funtoo installation ==

When you prepare the disk space that will hold the root of your future Funtoo instance (and so will hold /usr, /bin, /sbin, /etc, etc.), don't use the root subvolume but take an extra step to define a subvolume, as illustrated below:

<console>
###i## fdisk /dev/sda
....
###i## mkfs.btrfs /dev/sda2
###i## mount /dev/sda2 /mnt/funtoo
###i## btrfs subvolume create /mnt/funtoo/live-vfs-root-20110523
###i## chroot /mnt/funtoo/live-vfs-root-20110523 /bin/bash
</console>

Then either tag this subvolume as the default one with '''btrfs subvolume set-default''', or point the kernel at it with the ''rootflags''/''real_rootflags'' parameter, as described in the two ways above.

== Space recovery / defragmenting the filesystem ==

{{Fancytip|From time to time it is advised to ask for re-optimizing the filesystem structures and data blocks in a subvolume. In BTRFS terminology this is called a defragmentation and it can only be performed when the subvolume is mounted in the VFS (online defragmentation):}}

<console>
###i## btrfs filesystem defrag /mnt
</console>
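
The same subcommand also accepts individual files, so you can defragment a single large file; a sketch (the file name is of course illustrative):

<console>
###i## btrfs filesystem defrag /mnt/bigfile.dat
</console>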

You can still access the subvolume, and even change its contents, while a defragmentation is running.

== Recovery ==

If you are using '''Linux 3.2 and later (only!)''', you can use the ''recovery'' mount option to make BTRFS seek a usable copy of the tree root (several copies of it exist on the disk). Just mount your filesystem as:

<console>
###i## mount -o recovery /dev/yourBTRFSvolume /mount/point
</console>

== btrfs-select-super / btrfs-zero-log ==

The two tools this section is about are not built by default, and the Funtoo ebuilds do not build them either for the moment, so you must build them manually:

<console>
###i## mkdir ~/src
###i## cd ~/src
###i## git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
###i## cd btrfs-progs
###i## make && make btrfs-select-super && make btrfs-zero-log
</console>

{{fancynote|In the past, ''btrfs-select-super'' and ''btrfs-zero-log'' were lying in the git-next branch; this is no longer the case and those tools are available in the master branch.}}

In case of a corrupted superblock, start by asking btrfsck to use an alternate copy of the superblock instead of superblock #0. This is achieved via the -s option followed by the number of the alternate copy you wish to use. In the following example we ask it to use the superblock copy #2 of /dev/sda7:

<console>
###i## ./btrfsck -s 2 /dev/sda7
</console>

When btrfsck is happy, use btrfs-select-super to restore the default superblock (copy #0) with a clean copy. In the following example we ask for restoring the superblock of /dev/sda7 with its copy #2:

<console>
###i## ./btrfs-select-super -s 2 /dev/sda7
</console>

Note that this will overwrite all the other supers on the disk, which means you really only get one shot at it.

To truncate the journal of a BTRFS partition (and thereby lose any changes that only exist in the log!), just give the filesystem to process to ''btrfs-zero-log'':

<console>
###i## ./btrfs-zero-log /dev/sda7
</console>

This is not a generic technique; it works by permanently throwing away a small amount of potentially good data.

== btrfsck ==

If one thing is famous in the BTRFS world, it would be the long-wished-for fully functional ''btrfsck''. A read-only version of the tool existed out there for years; however, at the beginning of 2012, BTRFS developers made a public and very experimental release: the secret jewel lies in the ''dangerdonteveruse'' branch of the BTRFS Git repository held by Chris Mason on kernel.org.

<console>
###i## git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
###i## cd btrfs-progs
###i## git checkout dangerdonteveruse
###i## make
</console>

So far the tool can repair broken filesystem structures and reset the CRC (checksum) tree.

To repair:

<console>
###i## btrfsck --repair /dev/''yourBTRFSvolume''
</console>

To wipe the CRC tree:

<console>
###i## btrfsck --init-csum-tree /dev/''yourBTRFSvolume''
</console>

Two other options exist in the source code: ''--super'' (equivalent of btrfs-select-super?) and ''--init-extent-tree'' (clears out any extent?).

[[Category:Featured]]
[[Category:Filesystems]]
{{ArticleFooter}}

Latest revision as of 09:41, December 28, 2014

   Support Funtoo!
Get an awesome Funtoo container and support Funtoo! See Funtoo Containers for more information.
   Important

BTRFS is still experimental even with latest Linux kernels (3.4-rc at date of writing) so be prepared to lose some data sooner or later or hit a severe issue/regressions/"itchy" bugs. Subliminal message: Do not put critical data on BTRFS partitions.


Introduction

BTRFS is an advanced filesystem mostly contributed by Sun/Oracle whose origins take place in 2007. A good summary is given in [1]. BTRFS aims to provide a modern answer for making storage more flexible and efficient. According to its main contributor, Chris Mason, the goal was "to let Linux scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what's being used and makes it more reliable." (Ref. http://en.wikipedia.org/wiki/Btrfs).

Btrfs, often compared to ZFS, is offering some interesting features like:

  • Using very few fixed location metadata, thus allowing an existing ext2/ext3 filesystem to be "upgraded" in-place to BTRFS.
  • Operations are transactional
  • Online volume defragmentation (online filesystem check is on the radar but is not yet implemented).
  • Built-in storage pool capabilities (no need for LVM)
  • Built-in RAID capabilities (both for the data and filesystem metadata). RAID-5/6 is planned for 3.5 kernels
  • Capabilities to grow/shrink the volume
  • Subvolumes and snapshots (extremely powerful, you can "rollback" to a previous filesystem state as if nothing had happened).
  • Copy-On-Write
  • Usage of B-Trees to store the internal filesystem structures (B-Trees are known to have a logarithmic growth in depth, thus making them more efficient when scanning)

Requirements

A recent Linux kernel (BTRFS metadata format evolves from time to time and mounting using a recent Linux kernel can make the BTRFS volume unreadable with an older kernel revision, e.g. Linux 2.6.31 vs Linux 2.6.30). You must also use sys-fs/btrfs-progs (0.19 or better use -9999 which points to the git repository).

Playing with BTRFS storage pool capabilities

Whereas it would possible to use btrfs just as you are used to under a non-LVM system, it shines in using its built-in storage pool capabilities. Tired of playing with LVM ? :-) Good news: you do not need it anymore with btrfs.

Setting up a storage pool

BTRFS terminology is a bit confusing. If you already have used another 'advanced' filesystem like ZFS or some mechanism like LVM, it's good to know that there are many correlations. In the BTRFS world, the word volume corresponds to a storage pool (ZFS) or a volume group (LVM). Ref. http://www.rkeene.org/projects/info/wiki.cgi/165

The test bench uses disk images through loopback devices. Of course, in a real world case, you will use local drives or units though a SAN. To start with, 5 devices of 1 GiB are allocated:

root # dd if=/dev/zero of=/tmp/btrfs-vol0.img bs=1G count=1
root # dd if=/dev/zero of=/tmp/btrfs-vol1.img bs=1G count=1
root # dd if=/dev/zero of=/tmp/btrfs-vol2.img bs=1G count=1
root # dd if=/dev/zero of=/tmp/btrfs-vol3.img bs=1G count=1
root # dd if=/dev/zero of=/tmp/btrfs-vol4.img bs=1G count=1

Then attached:

root # losetup /dev/loop0 /tmp/btrfs-vol0.img
root # losetup /dev/loop1 /tmp/btrfs-vol1.img
root # losetup /dev/loop2 /tmp/btrfs-vol2.img
root # losetup /dev/loop3 /tmp/btrfs-vol3.img
root # losetup /dev/loop4 /tmp/btrfs-vol4.img

Creating the initial volume (pool)

BTRFS uses different strategies to store data and for the filesystem metadata (ref. https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices).

By default the behavior is:

  • metadata is replicated on all of the devices. If a single device is used the metadata is duplicated inside this single device (useful in case of corruption or bad sector, there is a higher chance that one of the two copies is clean). To tell btrfs to maintain a single copy of the metadata, just use single. Remember: dead metadata = dead volume with no chance of recovery.
  • data is spread amongst all of the devices (this means no redundancy; any data block left on a defective device will be inaccessible)

To create a BTRFS volume made of multiple devices with default options, use:

root # mkfs.btrfs /dev/loop0 /dev/loop1 /dev/loop2 

To create a BTRFS volume made of a single device with a single copy of the metadata (dangerous!), use:

root # mkfs.btrfs -m single /dev/loop0

To create a BTRFS volume made of multiple devices with metadata spread amongst all of the devices, use:

root # mkfs.btrfs -m raid0 /dev/loop0 /dev/loop1 /dev/loop2 

To create a BTRFS volume made of multiple devices, with metadata spread amongst all of the devices and data mirrored on all of the devices (you probably don't want this in a real setup), use:

root # mkfs.btrfs -m raid0 -d raid1 /dev/loop0 /dev/loop1 /dev/loop2 

To create a fully redundant BTRFS volume (data and metadata mirrored amongst all of the devices), use:

root # mkfs.btrfs -d raid1 /dev/loop0 /dev/loop1 /dev/loop2 
   Note

Technically you can use anything as a physical volume: you can have a volume composed of 2 local hard drives, 3 USB keys, 1 loopback device pointing to a file on a NFS share and 3 logical devices accessed through your SAN (you would be an idiot, but you can, nevertheless). Having different physical volume sizes would lead to issues, but it works :-).

Checking the initial volume

To verify the devices of which a BTRFS volume is composed, use btrfs-show (old style) or btrfs filesystem show (new style). You only need to specify one of the devices (the metadata has been designed to keep track of which device is linked to which other device). If the initial volume was set up like this:

root # mkfs.btrfs /dev/loop0 /dev/loop1 /dev/loop2

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

adding device /dev/loop1 id 2
adding device /dev/loop2 id 3
fs created label (null) on /dev/loop0
        nodesize 4096 leafsize 4096 sectorsize 4096 size 3.00GB
Btrfs Btrfs v0.19

It can be checked with any one of these commands (they are equivalent):

root # btrfs filesystem show /dev/loop0
root # btrfs filesystem show /dev/loop1
root # btrfs filesystem show /dev/loop2

The result is the same for all commands:

Label: none  uuid: 0a774d9c-b250-420e-9484-b8f982818c09
        Total devices 3 FS bytes used 28.00KB
        devid    3 size 1.00GB used 263.94MB path /dev/loop2
        devid    1 size 1.00GB used 275.94MB path /dev/loop0
        devid    2 size 1.00GB used 110.38MB path /dev/loop1

To show all of the volumes that are present:

root # btrfs filesystem show
Label: none  uuid: 0a774d9c-b250-420e-9484-b8f982818c09
        Total devices 3 FS bytes used 28.00KB
        devid    3 size 1.00GB used 263.94MB path /dev/loop2
        devid    1 size 1.00GB used 275.94MB path /dev/loop0
        devid    2 size 1.00GB used 110.38MB path /dev/loop1

Label: none  uuid: 1701af39-8ea3-4463-8a77-ec75c59e716a
        Total devices 1 FS bytes used 944.40GB
        devid    1 size 1.42TB used 1.04TB path /dev/sda2

Label: none  uuid: 01178c43-7392-425e-8acf-3ed16ab48813
        Total devices 1 FS bytes used 180.14GB
        devid    1 size 406.02GB used 338.54GB path /dev/sda4
   Warning

The BTRFS wiki mentions that btrfs device scan should be performed before mounting; if the incantation is skipped, the kernel may not know about all the members of a multi-device volume, and the volume may not be seen or may fail to mount.
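If in doubt, running a scan before mounting is cheap and harmless. A minimal sketch (first scanning all block devices, then only specific ones; the loop devices are those of our test bench):

root # btrfs device scan
root # btrfs device scan /dev/loop0 /dev/loop1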

Mounting the initial volume

BTRFS volumes can be mounted like any other filesystem. The cherry on top of the sundae is that the design of the BTRFS metadata makes it possible to use any of the volume's devices. The following commands are equivalent:

root # mount /dev/loop0 /mnt
root # mount /dev/loop1 /mnt
root # mount /dev/loop2 /mnt

Whichever physical device is used to mount the BTRFS volume, df -h reports the same figures (in all cases the same 3.0 GiB volume size is reported):

root # df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop1      3.0G   56K  1.8G   1% /mnt

The following command prints very useful information (like how the BTRFS volume has been created):

root # btrfs filesystem df /mnt      
Data, RAID0: total=409.50MB, used=0.00
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=204.75MB, used=28.00KB
Metadata: total=8.00MB, used=0.00

By the way, as you can see, the mount point must be given to the btrfs command, not one of the physical devices.

Shrinking the volume

A common practice in system administration is to leave some head space, instead of using the whole capacity of a storage pool (just in case). With btrfs one can easily shrink volumes. Let's shrink the volume a bit (about 25%):

root # btrfs filesystem resize -500m /mnt
root # df -h
/dev/loop1      2.6G   56K  1.8G   1% /mnt

And yes, it is an on-line resize; there is no need to umount/shrink/mount, so no downtime! :-) However, a BTRFS volume requires a minimum size... if the shrink is too aggressive the volume won't be resized:

root # btrfs filesystem resize -1g /mnt  
Resize '/mnt' of '-1g'
ERROR: unable to resize '/mnt'
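Besides relative adjustments, btrfs filesystem resize also accepts an absolute target size; a minimal sketch (the target value is illustrative):

root # btrfs filesystem resize 2g /mnt
Resize '/mnt' of '2g'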

Growing the volume

This is the opposite operation: you can make a BTRFS volume grow to reach a particular size (e.g. 150 more megabytes):

root # btrfs filesystem resize +150m /mnt
Resize '/mnt' of '+150m'
root # df -h
/dev/loop1      2.7G   56K  1.8G   1% /mnt

You can also take an "all you can eat" approach via the max option, meaning all of the available space will be used for the volume:

root # btrfs filesystem resize max /mnt
Resize '/mnt' of 'max'
root # df -h
/dev/loop1      3.0G   56K  1.8G   1% /mnt

Adding a new device to the BTRFS volume

To add a new device to the volume:

root # btrfs device add /dev/loop4 /mnt 
root # btrfs filesystem show /dev/loop4 
Label: none  uuid: 0a774d9c-b250-420e-9484-b8f982818c09
        Total devices 4 FS bytes used 28.00KB
        devid    3 size 1.00GB used 263.94MB path /dev/loop2
        devid    4 size 1.00GB used 0.00 path /dev/loop4
        devid    1 size 1.00GB used 275.94MB path /dev/loop0
        devid    2 size 1.00GB used 110.38MB path /dev/loop1 

Again, there is no need to umount the volume first: adding a device is an on-line operation (the device has no space used yet, hence the '0.00'). The operation is not finished yet, as we must tell btrfs to prepare the new device (i.e. rebalance/mirror the metadata and the data between all of the devices):

root # btrfs filesystem balance /mnt
root # btrfs filesystem show /dev/loop4
Label: none  uuid: 0a774d9c-b250-420e-9484-b8f982818c09
        Total devices 4 FS bytes used 28.00KB
        devid    3 size 1.00GB used 110.38MB path /dev/loop2
        devid    4 size 1.00GB used 366.38MB path /dev/loop4
        devid    1 size 1.00GB used 378.38MB path /dev/loop0
        devid    2 size 1.00GB used 110.38MB path /dev/loop1
   Note

Depending on the sizes involved and the contents of the volume, a balancing operation can take several minutes or hours.

Removing a device from the BTRFS volume

root # btrfs device delete /dev/loop2 /mnt
root # btrfs filesystem show /dev/loop0   
Label: none  uuid: 0a774d9c-b250-420e-9484-b8f982818c09
        Total devices 4 FS bytes used 28.00KB
        devid    4 size 1.00GB used 264.00MB path /dev/loop4
        devid    1 size 1.00GB used 268.00MB path /dev/loop0
        devid    2 size 1.00GB used 0.00 path /dev/loop1
        *** Some devices missing
root # df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop1      3.0G   56K  1.5G   1% /mnt

Here again, removing a device is totally dynamic and can be done as an on-line operation! Note that when a device is removed, its content is transparently redistributed among the other devices.

Obvious points:

  • DO NOT UNPLUG THE DEVICE BEFORE THE END OF THE OPERATION; DATA LOSS WILL RESULT!
  • If you have used raid0 for either the metadata or the data at BTRFS volume creation, you will end up with an unusable volume if one of the devices fails before being properly removed from the volume, as some stripes will be lost.

Once you add a new device to the BTRFS volume as a replacement for a removed one, you can cleanup the references to the missing device:

root # btrfs device delete missing /mnt
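Put together, replacing a failed device might look like this (a sketch, assuming the volume is still mountable and /dev/loop4 is the replacement):

root # btrfs device add /dev/loop4 /mnt
root # btrfs device delete missing /mnt
root # btrfs filesystem balance /mnt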

Using a BTRFS volume in degraded mode

   Warning

It is not possible to use a volume in degraded mode if raid0 has been used for data/metadata and a device has not been properly removed with btrfs device delete (some stripes will be missing). The situation is even worse if RAID0 is used for the metadata: trying to mount a BTRFS volume in read/write mode while not all of the devices are accessible will simply kill the remaining metadata, hence making the BTRFS volume totally unusable... you have been warned! :-)

If you use raid1 or raid10 for data AND metadata and a usable submirror is accessible (consisting of 1 drive in the case of RAID1, or the two drives of the same RAID0 array in the case of RAID10), you can mount the array in degraded mode in case some devices are missing (e.g. dead SAN link or dead drive):

root # mount -o degraded /dev/loop0 /mnt

If you use RAID0 for the metadata (and one of your drives is inaccessible), or RAID10 with not enough drives on-line for even a degraded mode to be possible, btrfs will refuse to mount the volume:

root # mount /dev/loop0 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

The situation is no better if you have used RAID1 for the metadata and RAID0 for the data: you can mount the volume in degraded mode, but you will encounter problems while accessing your files:

root # cp /mnt/test.dat /tmp 
cp: reading `/mnt/test.dat': Input/output error
cp: failed to extend `/tmp/test.dat': Input/output error

Playing with subvolumes and snapshots

A story of boxes....

When you think about subvolumes in BTRFS, think about boxes. Each one of those can contain items and other smaller boxes ("sub-boxes") which in turn can also contain items and boxes (sub-sub-boxes) and so on. Each box and item has a number and a name, except for the top level box, which has only a number (zero). Now imagine that all of the boxes are semi-opaque: you can see what they contain if you are outside the box, but you can't see outside when you are inside the box. Thus, depending on the box you are in, you can view either all of the items and sub-boxes (top level box) or only a part of them (any other box but the top level one). To give you a better idea of this somewhat abstract explanation, let's illustrate a bit:

(0) --+-> Item A (1)
      |
      +-> Item B (2)
      |
      +-> Sub-box 1 (3) --+-> Item C (4)
      |                   |
      |                   +-> Sub-sub-box 1.1 (5) --+-> Item D (6)
      |                   |                         | 
      |                   |                         +-> Item E (7)
      |                   |                         |
      |                   |                         +-> Sub-Sub-sub-box 1.1.1 (8) ---> Item F (9)
      |                   +-> Item F (10)
      |
      +-> Sub-box 2 (11) --> Item G (12)                    

What you see in the hierarchy depends on where you are (note that the top level box, numbered 0, doesn't have a name; you will see why later). So:

  • If you are in the top level box (numbered 0) you see everything, i.e. things numbered 1 to 12
  • If you are in "Sub-sub-box 1.1" (numbered 5), you see only things 6 to 9
  • If you are in "Sub-box 2" (numbered 11), you only see what is numbered 12

Did you notice? We have two items named 'F' (respectively numbered 9 and 10). This is not a typographic error; it just illustrates the fact that every item lives its own peaceful existence in its own box. Although they have the same name, 9 and 10 are two distinct and unrelated objects (of course it is impossible to have two objects named 'F' in the same box, even though they would be numbered differently).

... applied to BTRFS! (or, "What is a volume/subvolume?")

BTRFS subvolumes work in the exact same manner, with some nuances:

  • First, imagine a frame that surrounds the whole hierarchy (represented in dots below). This is your BTRFS volume. A bit abstract at first glance, but BTRFS volumes have no tangible existence; they are just an aggregation of devices tagged as being clustered together (that fellowship is created when you invoke mkfs.btrfs or btrfs device add).
  • Second, the first level of the hierarchy contains only a single box numbered zero, which can never be destroyed (because everything it contains would also be destroyed).

If in our analogy of a nested boxes structure we used the word "box", in the real BTRFS world we use the word "subvolume" (box => subvolume). Just as in our boxes analogy, all subvolumes hold a unique number greater than zero and a name, with the exception of the root subvolume located at the very first level of the hierarchy, which is always numbered zero and has no name (BTRFS tools destroy subvolumes by their name, not their number, so no name = no possible destruction. This is a totally intentional architectural choice, not a flaw).

Here is a typical hierarchy:

.....BTRFS Volume................................................................................................................................
.
.  Root subvolume (0) --+-> Subvolume SV1 (258) ---> Directory D1 --+-> File F1
.                       |                                           |
.                       |                                           +-> File F2
.                       |
.                       +-> Directory D1 --+-> File F1
.                       |                  |
.                       |                  +-> File F2
.                       |                  |
.                       |                  +-> File F3
.                       |                  |
.                       |                  +-> Directory D11 ---> File F4
.                       +-> File F1
.                       |
.                       +-> Subvolume SV2 (259) --+-> Subvolume SV21 (260)
.                                                 |
.                                                 +-> Subvolume SV22 (261) --+-> Directory D2 ---> File F4
.                                                                            |
.                                                                            +-> Directory D3 --+-> Subvolume SV221 (262) ---> File F5
.                                                                            |                  |
.                                                                            |                  +-> File F6
.                                                                            |                  |
.                                                                            |                  +-> File F7
.                                                                            |
.                                                                            +-> File F8
.
.....................................................................................................................................

Maybe you have a question: "Okay, what is the difference between a directory and a subvolume? Both can contain something!". To further confuse you, here is what users get if they reproduce the first level of the hierarchy on a real machine:

root # ls -l
total 0
drwx------ 1 root root 0 May 23 12:48 SV1
drwxr-xr-x 1 root root 0 May 23 12:48 D1
-rw-r--r-- 1 root root 0 May 23 12:48 F1
drwx------ 1 root root 0 May 23 12:48 SV2

Although subvolumes SV1 and SV2 have been created with special BTRFS commands, they appear just as if they were ordinary directories! A subtle nuance exists, however: think again of the boxes analogy we used before and map the concepts in the following manner:

  • a subvolume : the semi-opaque box
  • a directory : a sort of item (that can contain something even another subvolume)
  • a file : another sort of item

So, in the internal filesystem metadata, SV1 and SV2 are stored in a different manner than D1 (although this is transparently handled for users). You can, however, see SV1 and SV2 for what they are (subvolumes) by running the following command (the subvolume numbered 0 has been mounted on /mnt):

root # btrfs subvolume list /mnt
ID 258 top level 5 path SV1
ID 259 top level 5 path SV2

What would we get if we created SV21 and SV22 inside of SV2? Let's try! Before going further, you should be aware that a subvolume is created by invoking the magic command btrfs subvolume create:

root # cd /mnt/SV2
root # btrfs subvolume create SV21
Create subvolume './SV21'
root # btrfs subvolume create SV22
Create subvolume './SV22'
root # btrfs subvolume list /mnt  
ID 258 top level 5 path SV1
ID 259 top level 5 path SV2
ID 260 top level 5 path SV2/SV21
ID 261 top level 5 path SV2/SV22

Again, invoking ls in /mnt/SV2 will report the subvolumes as being directories:

root # ls -l
total 0
drwx------ 1 root root 0 May 23 13:15 SV21
drwx------ 1 root root 0 May 23 13:15 SV22

Changing the point of view on the subvolumes hierarchy

At some point in our boxes analogy we talked about what we see and what we don't see depending on our location in the hierarchy. Here lies a big important point: whereas most BTRFS users mount the root subvolume (subvolume id = 0; we will retain the root subvolume terminology) in their VFS hierarchy, thus making visible the whole hierarchy contained in the BTRFS volume, it is absolutely possible to mount only a subset of it. How is that possible? Simple: just specify the subvolume number when you invoke mount. For example, to mount the hierarchy in the VFS starting at subvolume SV22 (261), do the following:

root # mount -o subvolid=261 /dev/loop0 /mnt

Here lies an important notion not disclosed in the previous paragraph: although both directories and subvolumes can act as containers, only subvolumes can be mounted in a VFS hierarchy. It is a fundamental aspect to remember: you cannot mount a sub-part of a subvolume in the VFS; you can only mount the subvolume itself. Considering the hierarchy schema in the previous section, if you want to access the directory D3 you have three possibilities:

  1. Mount the non-named subvolume (numbered 0) and access D3 through /mnt/SV2/SV22/D3 if the non-named subvolume is mounted in /mnt
  2. Mount the subvolume SV2 (numbered 259) and access D3 through /mnt/SV22/D3 if the subvolume SV2 is mounted in /mnt
  3. Mount the subvolume SV22 (numbered 261) and access D3 through /mnt/D3 if the subvolume SV22 is mounted in /mnt

This is accomplished by the following commands, respectively:

root # mount -o subvolid=0 /dev/loop0 /mnt
root # mount -o subvolid=259 /dev/loop0 /mnt
root # mount -o subvolid=261 /dev/loop0 /mnt
   Note

When a subvolume is mounted in the VFS, everything located "above" the subvolume is hidden. Concretely, if you mount the subvolume numbered 261 in /mnt, you only see what is under SV22, you won't see what is located above SV22 like SV21, SV2, D1, SV1, etc.

The default subvolume

$100 questions: 1. "If I don't put 'subvolid' in the command line, how does the kernel know which one of the subvolumes it has to mount?" 2. "Does omitting the 'subvolid' automatically mean 'mount the subvolume numbered 0'?" Answers: 1. BTRFS magic! ;-) 2. No, not necessarily; you can choose something other than the non-named subvolume.

When you create a brand new BTRFS filesystem, the system not only creates the initial root subvolume (numbered 0) but also tags it as the default subvolume. When you ask the operating system to mount a subvolume contained in a BTRFS volume without specifying a subvolume number, it determines which of the existing subvolumes has been tagged as the "default subvolume" and mounts it. If none of the existing subvolumes carries the "default subvolume" tag (e.g. because the default subvolume has been deleted), the mount command gives up with a rather cryptic message:

root # mount /dev/loop0 /mnt
mount: No such file or directory

It is also possible to change at any time which subvolume contained in a BTRFS volume is considered the default volume. This is accomplished with btrfs subvolume set-default. The following tags the subvolume 261 as being the default:

root # btrfs subvolume set-default 261 /mnt

After that operation, doing the following is exactly the same:

root # mount /dev/loop0 /mnt
root # mount -o subvolid=261 /dev/loop0 /mnt
   Note

The chosen new default subvolume must be visible in the VFS when you invoke btrfs subvolume set-default.
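To check which subvolume is currently tagged as the default, newer versions of btrfs-progs provide a get-default counterpart (the output shown is illustrative):

root # btrfs subvolume get-default /mnt
ID 261 top level 5 path SV2/SV22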

Deleting subvolumes

Question: "As subvolumes appear like directories, can I delete a subvolume by doing an rm -rf on it?". Answer: Yes, you can, but that way is not the most elegant, especially when it contains several gigabytes of data scattered on thousands of files, directories and maybe other subvolumes located in the one you want to remove. It isn't elegant because rm -rf could take several minutes (or even hours!) to complete whereas something else can do the same job in the fraction of a second.

"Huh?" Yes perfectly possible, and here is the cool goodie for the readers who arrived at this point: when you want to remove a subvolume, use btrfs subvolume delete instead of rm -rf. That btrfs command will remove the snapshots in a fraction of a second, even it contains several gigabytes of data!

   Warning
  • You can never remove the root subvolume of a BTRFS volume, as btrfs subvolume delete expects a subvolume name (again: this is not a flaw in the design of BTRFS; removing the subvolume numbered 0 would destroy the entirety of a BTRFS volume... too dangerous).
  • If the subvolume you delete was tagged as the default subvolume, you will have to designate another default subvolume or explicitly tell the system which one of the subvolumes has to be mounted.

An example: considering the hierarchy given above, and supposing you have mounted the non-named subvolume numbered 0 in /mnt, you can remove SV22 by doing:

root # btrfs subvolume delete /mnt/SV2/SV22

Obviously the BTRFS volume will look like this after the operation:

.....BTRFS Volume................................................................................................................................
.
.  (0) --+-> Subvolume SV1 (258) ---> Directory D1 --+-> File F1
.        |                                           |
.        |                                           +-> File F2
.        |
.        +-> Directory D1 --+-> File F1
.        |                  |
.        |                  +-> File F2
.        |                  |
.        |                  +-> File F3
.        |                  |
.        |                  +-> Directory D11 ---> File F4
.        +-> File F1
.        |
.        +-> Subvolume SV2 (259) --+-> Subvolume SV21 (260)
.....................................................................................................................................

Snapshots and subvolumes

If you have a good comprehension of what a subvolume is, understanding what a snapshot is won't be a problem: a snapshot is a subvolume with some initial contents. "Some initial contents" here means an exact copy.

When you think about snapshots, think about copy-on-write: the data blocks are not duplicated between a mounted subvolume and its snapshot unless you start to make changes to the files (a snapshot can occupy nearly zero extra space on the disk). As time goes on, more and more data blocks will be changed, thus making snapshots "occupy" more and more space on the disk. It is therefore recommended to keep only a minimal set of them and to remove unnecessary ones to avoid wasting space on the volume.


The following illustrates how to take a snapshot of the VFS root:

root # btrfs subvolume snapshot / /snap-2011-05-23
Create a snapshot of '/' in '//snap-2011-05-23'

Once created, the snapshot will persist in /snap-2011-05-23 as long as you don't delete it. Note that the snapshot contents will remain exactly as they were at the time it was taken (as long as you don't make changes... BTRFS snapshots are writable!). A drawback of having snapshots: if you delete some files in the original filesystem, the snapshot still contains them and the disk blocks can't be reclaimed as free space. Remember to remove unwanted snapshots and keep only a bare minimal set of them.

Listing and deleting snapshots

As there is no distinction between a snapshot and a subvolume, snapshots are managed with the exact same commands, especially when the time has come to delete some of them. An interesting feature in BTRFS is that snapshots are writable: you can take a snapshot and make changes in the files/directories it contains. A word of caution: there are no undo capabilities! What has been changed has been changed forever... If you need to do several tests, just take several snapshots or, better yet, snapshot your snapshot then do whatever you need in this copy-of-the-copy :-).
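For example, listing and then dropping the snapshot taken in the previous section takes two commands (the output shown is illustrative):

root # btrfs subvolume list /
ID 256 top level 5 path snap-2011-05-23
root # btrfs subvolume delete /snap-2011-05-23
Delete subvolume '/snap-2011-05-23'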

Using snapshots for system recovery (aka Back to the Future)

Here is where BTRFS can literally be a lifeboat. Suppose you want to apply some updates via emerge -uaDN @world but you want to be sure that you can jump back into the past in case something goes seriously wrong after the system update (does libpng14 remind you of anything?!). Here is the "putting-things-together" part of the article!

The following only applies if your VFS root and the system directories (/sbin, /bin, /usr, /etc, ...) are located on a BTRFS volume. To make things simple, the whole structure is supposed to be located in the SAME subvolume of the same BTRFS volume.

To jump back into the past you have at least two options:

  1. Fiddle with the default subvolume numbers
  2. Use the kernel command line parameters in the bootloader configuration files

In all cases you must take a snapshot of your VFS root *before* updating the system:

root # btrfs subvolume snapshot / /before-updating-2011-05-24
Create a snapshot of '/' in '//before-updating-2011-05-24'
   Note

Hint: you can create an empty file at the root of your snapshot, with the name of your choice, to help you easily identify which subvolume is currently mounted (e.g. if the snapshot has been named before-updating-2011-05-24, you can use a slightly different name like current-is-before-updating-2011-05-24 => touch /before-updating-2011-05-24/current-is-before-updating-2011-05-24). This is extremely useful if you are dealing with several snapshots.

There is no "better" way; it's just a question of personal preference.

Way #1: Fiddle with the default subvolume number

Hypothesis:

  • Your "production" VFS root partition resides in the root subvolume (subvolid=0),
  • Your /boot partition (where the bootloader configuration files are stored) is on another standalone partition

First search for the newly created subvolume number:

root # btrfs subvolume list / 
ID 256 top level 5 path before-updating-2011-05-24

'256' is the ID to be retained (of course, this ID will differ in your case).

Now, change the default subvolume of the BTRFS volume to designate the subvolume (snapshot) before-updating instead of the root subvolume, then reboot:

root # btrfs subvolume set-default 256 /

Once the system has rebooted, and if you followed the advice in the previous paragraph suggesting to create an empty file with the same name as the snapshot, you should be able to see that the mounted VFS root is the copy held by the snapshot before-updating-2011-05-24:

root # ls -l /
...
-rw-rw-rw-   1 root root    0 May 24 20:33 current-is-before-updating-2011-05-24
...

The correct subvolume has been used for mounting the VFS! Excellent! It is now time to mount your "production" VFS root (remember: the root subvolume can only be accessed via its identification number, i.e. 0):

root # mount -o subvolid=0 /dev/sda2 /mnt
root # mount
...
/dev/sda2 on /mnt type btrfs (rw,subvolid=0)

Oh by the way, as the root subvolume is now mounted in /mnt let's try something, just for the sake of the demonstration:

root # ls /mnt
...
drwxr-xr-x   1 root root    0 May 24 20:33 current-is-before-updating-2011-05-24
...
root # btrfs subvolume list /mnt
ID 256 top level 5 path before-updating-2011-05-24

No doubt possible :-) Time to rollback! For this, rsync will be used in the following way:

root # rsync --progress -aHAX --exclude=/proc --exclude=/dev --exclude=/sys --exclude=/mnt / /mnt

Basically we are asking rsync to:

  • preserve timestamps, hard and symbolic links, owner/group IDs, ACLs and any extended attributes (refer to the rsync manual page for further details on options used) and to report its progression
  • ignore the mount points where virtual filesystems are mounted (procfs, sysfs...)
  • avoid an endless recursion by excluding /mnt itself (you can speed up the process by excluding some extra directories if you are sure they don't hold any important changes, or any changes at all, like /var/tmp/portage for example; see the variant below).
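A variant that also skips the Portage build area (assuming nothing valuable lives in /var/tmp/portage):

root # rsync --progress -aHAX --exclude=/proc --exclude=/dev --exclude=/sys --exclude=/mnt --exclude=/var/tmp/portage / /mnt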

Be patient! The rsync may take several minutes or hours depending on the amount of data to process...

Once finished, you will have to set the default subvolume back to the root subvolume:

root # btrfs subvolume set-default 0 /mnt
   Warning

DO NOT ENTER / instead of /mnt in the above command; it won't work and you will be under the snapshot before-updating-2011-05-24 the next time the machine reboots.

The reason is that the subvolume number must be "visible" from the path given at the end of the btrfs subvolume set-default command line. Refer again to the boxes analogy: in our context we are inside the sub-box numbered 256, which is located *inside* the box numbered 0, so from there we can neither see it nor designate it.

Now just reboot and you should be back in business! Once you have rebooted, just check whether you are really under the right subvolume:

root # ls / 
...
drwxr-xr-x   1 root root    0 May 24 20:33 current-is-before-updating-2011-05-24
...
root # btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24

At the right place? Excellent! You can now delete the snapshot if you wish, or better: keep it as a lifeboat of the "last known good system state."

Way #2: Change the kernel command line in the bootloader configuration files

First search for the newly created subvolume number:

root # btrfs subvolume list / 
ID 256 top level 5 path before-updating-2011-05-24

'256' is the ID to be retained (can differ in your case).

Now, with your favourite text editor, edit the adequate kernel command line in your bootloader configuration (/etc/boot.conf). This file is typically organized in several sections (one per kernel present on the system, plus some global settings), like the excerpt below:

set timeout=5
set default=0

# Production kernel
menuentry "Funtoo Linux production kernel (2.6.39-gentoo x86/64)" {
   insmod part_msdos
   insmod ext2
   ...
   set root=(hd0,1)
   linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2 
   initrd /initramfs-x86_64-2.6.39-gentoo
}
...

Find the correct kernel line and add one of the following statements after root=/dev/sdX:

rootflags=subvol=before-updating-2011-05-24
   - Or -
rootflags=subvolid=256
   Warning

If the kernel you want to use has been generated with Genkernel, you MUST use real_rootflags=subvol=... instead of rootflags=subvol=..., at the penalty of not having your rootflags taken into consideration by the kernel on reboot.
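For a Genkernel-generated kernel, the same entry would carry the flag like this (a sketch reusing the kernel and device names of the excerpt above):

linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2 real_rootflags=subvol=before-updating-2011-05-24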


Applied to the previous example, you will get the following if you refer to the subvolume by its name:

set timeout=5
set default=0

# Production kernel
menuentry "Funtoo Linux production kernel (2.6.39-gentoo x86/64)" {
   insmod part_msdos
   insmod ext2
   ...
   set root=(hd0,1)
   linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2 rootflags=subvol=before-updating-2011-05-24
   initrd /initramfs-x86_64-2.6.39-gentoo
}
...

Or you will get the following if you refer to the subvolume by its identification number:

set timeout=5
set default=0

# Production kernel
menuentry "Funtoo Linux production kernel (2.6.39-gentoo x86/64)" {
   insmod part_msdos
   insmod ext2
   ...
   set root=(hd0,1)
   linux /kernel-x86_64-2.6.39-gentoo root=/dev/sda2 rootflags=subvolid=256
   initrd /initramfs-x86_64-2.6.39-gentoo
}
...

Once the modifications are done, save your changes and take the necessary extra steps to commit the configuration changes to the first sectors of the disk if needed (this mostly applies to users of LILO; GRUB and SILO do not need to be refreshed), then reboot.

Once the system has rebooted, and if you followed the advice in the previous paragraph suggesting to create an empty file with the same name as the snapshot, you should be able to see that the mounted VFS root is the copy held by the snapshot before-updating-2011-05-24:

root # ls -l /
...
-rw-rw-rw-   1 root root    0 May 24 20:33 current-is-before-updating-2011-05-24
...

The correct subvolume has been used for mounting the VFS! Excellent! It is now time to mount your "production" VFS root (remember: the root subvolume can only be accessed via its identification number, i.e. 0):

root # mount -o subvolid=0 /dev/sda2 /mnt
root # mount
...
/dev/sda2 on /mnt type btrfs (rw,subvolid=0)

Time to rollback! For this rsync will be used in the following way:

root # rsync --progress -aHAX --exclude=/proc --exclude=/dev --exclude=/sys --exclude=/mnt / /mnt

Here, please refer to what has been said in Way #1 concerning the rsync options used. Once everything is in place again, edit your bootloader configuration to remove the rootflags/real_rootflags kernel parameter, reboot, and check whether you are really under the right subvolume:

root # ls / 
...
drwxr-xr-x   1 root root    0 May 24 20:33 current-is-before-updating-2011-05-24
...
root # btrfs subvolume list /
ID 256 top level 5 path before-updating-2011-05-24

At the right place? Excellent! You can now delete the snapshot if you wish, or better: keep it as a lifeboat of the "last known good system state."

Some BTRFS practices / returns of experience / gotchas

  • Although BTRFS is still evolving, at the date of writing it (still) is an experimental filesystem and should not be used for production systems or for storing critical data (even if the data is non-critical, having backups on a partition formatted with a "stable" filesystem like ReiserFS or ext3/4 is recommended).
  • From time to time some changes are brought to the metadata (the BTRFS format is not definitive at the date of writing) and a BTRFS partition may no longer be usable with older Linux kernels (this happened with Linux 2.6.31).
  • More and more Linux distributions are proposing the filesystem as an alternative to ext4
  • Some reported gotchas: https://btrfs.wiki.kernel.org/index.php/Gotchas
  • Playing around with BTRFS can be a bit tricky, especially when dealing with default volumes and mount points (again: the boxes analogy)
  • Using compression (e.g. LZO => mount -o compress=lzo) on the filesystem can improve the throughput performance; however, many files nowadays are already compressed at the application level (music, pictures, videos...).
  • Using the space caching capabilities (mount -o space_cache) seems to bring some slight extra performance improvements.
  • There is a very interesting discussion on LKML about BTRFS design limitations with B-trees. We strongly encourage you to read it.

Deploying a Funtoo instance in a subvolume other than the root subvolume

Some Funtoo core devs have used BTRFS for many months and no major glitches have been reported so far, with two exceptions: a non-aligned memory access trap on SPARC64 in a checksum calculation routine (a minor issue; the latest kernels may have brought a correction), and, a long time ago, a problem that was more related to a kernel crash due to a bug that corrupted some internal data than to the filesystem code itself.

The following can simplify your life in case of recovery (not tested):

When you prepare the disk space that will hold the root of your future Funtoo instance (and so will hold /usr, /bin, /sbin, /etc, etc.), don't use the root subvolume; take an extra step and define a subvolume, as illustrated below:

root # fdisk /dev/sda
....
root # mkfs.btrfs /dev/sda2
root # mount /dev/sda2 /mnt/funtoo
root # btrfs subvolume create /mnt/funtoo/live-vfs-root-20110523
root # chroot /mnt/funtoo/live-vfs-root-20110523 /bin/bash

Then either:

  • Set /live-vfs-root-20110523 as the default subvolume (btrfs subvolume set-default... remember to inspect the subvolume identification number first; see the sketch after this list)
  • Use rootflags / real_rootflags (always use real_rootflags for kernels generated with Genkernel) on the kernel command line in your bootloader configuration file
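A minimal sketch of the first option (the subvolume identification number shown is hypothetical; check yours with btrfs subvolume list):

root # btrfs subvolume list /mnt/funtoo
ID 256 top level 5 path live-vfs-root-20110523
root # btrfs subvolume set-default 256 /mnt/funtoo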

Technically speaking, it won't change your life, except at system recovery time: when you want to roll back to a functional VFS root copy because something happened (buggy system package, too aggressive cleanup that removed Python, dead compiling toolchain...), you can avoid a time-costly rsync, at the cost of a bit of extra overhead when taking a snapshot.

Here again you have two ways to recover the system:

  • fiddling with the default subvolume:
    • Mount the non-named root subvolume (numbered 0) somewhere (e.g. mount -o subvolid=0 /dev/sdX /mnt)
    • Take a snapshot (remember to check its identification number) of your current subvolume and store it under the root subvolume you have just mounted (btrfs subvolume snapshot / /mnt/before-updating-20110524); note that BTRFS snapshots are not recursive, so subvolumes contained inside the snapshotted subvolume are not included in the copy
    • Update your system or perform whatever other "dangerous" operation
    • If you need to return to the last known good system state, just set the default subvolume to the snapshot you just took (btrfs subvolume set-default <snapshot number here> /mnt)
    • Reboot
    • Once you have rebooted, just mount the root subvolume again and delete the subvolume that corresponds to the failed system update (btrfs subvolume delete /mnt/<buggy VFS root snapshot name here>)
  • fiddling with the kernel command line:
    • Mount the non-named root subvolume (numbered 0) somewhere (e.g. mount -o subvolid=0 /dev/sdX /mnt)
    • Take a snapshot (remember to check its identification number) of your current subvolume and store it under the root subvolume you have just mounted (btrfs subvolume snapshot / /mnt/before-updating-20110524); again, snapshots are not recursive, so subvolumes contained inside the snapshotted subvolume are not included in the copy
    • Update your system or perform whatever other "dangerous" operation
    • If you need to return to the last known good system state, just set the rootflags/real_rootflags kernel parameter in your bootloader configuration file, as demonstrated in the previous paragraphs
    • Reboot
    • Once you have rebooted, just mount the root subvolume again and delete the subvolume that corresponds to the failed system update (btrfs subvolume delete /mnt/<buggy VFS root snapshot name here>)

Space recovery / defragmenting the filesystem

   Tip

From time to time it is advised to re-optimize the filesystem structures and data blocks in a subvolume. In BTRFS terminology this is called a defragmentation, and it can only be performed when the subvolume is mounted in the VFS (online defragmentation):

root # btrfs filesystem defrag /mnt

You can still access the subvolume, even change its contents, while a defragmentation is running.
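Defragmentation can also be pointed at a single file rather than a whole subvolume (the path is illustrative):

root # btrfs filesystem defrag /mnt/home/user/huge-database.db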

It is also a good idea to remove the snapshots you don't use anymore, especially if huge files and/or lots of files are changed, because snapshots will still hold some blocks that could otherwise be reused.

SSE 4.2 boost

If your CPU supports hardware calculation of CRC32 checksums (e.g. the Intel Nehalem series and later, or the AMD Bulldozer series), you are encouraged to enable that support in your kernel, since BTRFS makes aggressive use of them. Just check that you have enabled "CRC32c INTEL hardware acceleration" under Cryptographic API, either as a module or as a built-in feature.
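To verify both the CPU capability and the kernel option (assuming CONFIG_IKCONFIG_PROC is enabled so that /proc/config.gz exists):

root # grep -m1 -o sse4_2 /proc/cpuinfo
sse4_2
root # zgrep CRC32C /proc/config.gz
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=m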

Recovering an apparently dead BTRFS filesystem

Maintaining metadata coherence is critical in a filesystem design. Losing some data blocks (i.e. having some corrupted files) is less critical than having a screwed-up and unmountable filesystem, especially if you do backups on a regular basis (the rule with BTRFS is: *do backups*. BTRFS has no mature filesystem repair tool and you *will* end up having to re-create your filesystem from scratch again sooner or later).

Mounting with recovery option (Linux 3.2 and beyond)

If you are using Linux 3.2 or later (only!), you can use the recovery option to make BTRFS look for a usable copy of the tree root (several copies of it exist on the disk). Just mount your filesystem as:

root # mount -o recovery /dev/yourBTRFSvolume /mount/point

btrfs-select-super / btrfs-zero-log

Two other handy tools exist, but they are not deployed by default by the sys-fs/btrfs-progs ebuilds (even btrfs-progs-9999), because they used to lie only in the "next" branch of the btrfs-progs Git repository:

  • btrfs-select-super
  • btrfs-zero-log

Building the btrfs-progs goodies

The two tools this section is about are not built by default, and the Funtoo ebuilds do not build them either for the moment, so you must build them manually:

root # mkdir ~/src
root # cd ~/src
root # git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git 
root # cd btrfs-progs
root # make && make btrfs-select-super && make btrfs-zero-log
   Note

In the past, btrfs-select-super and btrfs-zero-log lay in the git "next" branch; this is no longer the case and those tools are now available in the master branch.

Fixing dead superblock

In case of a corrupted superblock, start by asking btrfsck to use an alternate copy of the superblock instead of superblock #0. This is achieved via the -s option followed by the number of the alternate copy you wish to use. In the following example we ask for superblock copy #2 of /dev/sda7 to be used:

root # ./btrfsck -s 2 /dev/sda7

When btrfsck is happy, use btrfs-select-super to restore the default superblock (copy #0) with a clean copy. In the following example we ask for the superblock of /dev/sda7 to be restored with its copy #2:

root # ./btrfs-select-super -s 2 /dev/sda7

Note that this will overwrite all the other supers on the disk, which means you really only get one shot at it.

If you run btrfs-select-super prior to figuring out which copy is good, you've lost your chance to find a good one.

Clearing the BTRFS journal

This will only help with one specific problem!

If you are unable to mount a BTRFS partition after a hard shutdown, crash or power loss, it may be due to faulty log playback in kernels prior to 3.2. The first thing to try is updating your kernel, and mounting. If this isn't possible, an alternate solution lies in truncating the BTRFS journal, but only if you see "replay_one_*" functions in the oops callstack.

To truncate the journal of a BTRFS partition (and thereby lose any changes that only exist in the log!), just pass the filesystem to btrfs-zero-log:

root # ./btrfs-zero-log /dev/sda7

This is not a generic technique, and works by permanently throwing away a small amount of potentially good data.

Using btrfsck

   Warning

Extremely experimental...

If one thing is famous in the BTRFS world, it would be the long-wished-for fully functional btrfsck. A read-only version of the tool existed out there for years; however, at the beginning of 2012, the BTRFS developers made a public and very experimental release: the secret jewel lies in the branch dangerdonteveruse of the BTRFS Git repository held by Chris Mason on kernel.org.

root # git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
root # cd btrfs-progs
root # git checkout dangerdonteveruse
root # make

So far the tool can:

  • Fix errors in the extents tree and in block group accounting
  • Wipe the CRC tree and create a brand new one (you can then mount the filesystem with CRC checking disabled, as sketched below)
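A hedged sketch of such a mount, assuming the nodatasum option (which disables data checksumming) is what you want:

root # mount -o nodatasum /dev/yourBTRFSvolume /mnt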

To repair:

root # btrfsck --repair /dev/yourBTRFSvolume

To wipe the CRC tree:

root # btrfsck --init-csum-tree /dev/yourBTRFSvolume

Two other options exist in the source code: --super (the equivalent of btrfs-select-super?) and --init-extent-tree (clears out the extent tree?)

Final words

We have only given the broad lines here; BTRFS can be very tricky, especially when several subvolumes coming from several BTRFS volumes are used. And remember: BTRFS is still experimental at the date of writing :)

Lessons learned

  • Very interesting, but it still lacks some important features present in ZFS like RAID-Z, virtual volumes, management by attributes, filesystem streaming, etc.
  • Extremely interesting for Gentoo/Funtoo system partitions (snapshot/rollback capabilities). However, it is not integrated in Portage yet.
  • If possible, use a file monitoring tool like Tripwire; it is handy to see which files have been corrupted once the filesystem is recovered, or if a bug happens
  • It is highly advised not to use the root subvolume when deploying a new Funtoo instance, or to put any kind of data on it in the more general case. Rolling back a data snapshot will be much easier and much less error prone (no copy process, just a matter of 'swapping' the subvolumes).
  • Backup, backup, backup your data! ;)

