May 19, 2016
Introduction
This part continues from the previous post, except I’ve loaded up some extra virtual disks for testing RAID5. RAID5 is the most experimental part of BTRFS, having only been implemented in kernel 3.19.
The tests
Test One: Setting up the RAID5
So let’s look at our setup. In this case I have three extra 6GB disks (sdb, sdc, sdd) next to the system disk.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 6G 0 disk
├─sda1 8:1 0 243M 0 part /boot
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 5.8G 0 part
├─btrfs--vg-root (dm-0) 252:0 0 4.8G 0 lvm /
└─btrfs--vg-swap_1 (dm-1) 252:1 0 1G 0 lvm [SWAP]
sdb 8:16 0 6G 0 disk
sdc 8:32 0 6G 0 disk
sdd 8:48 0 6G 0 disk
So let’s create a BTRFS RAID5 on top of those three.
sudo mkfs.btrfs -d raid5 -m raid5 -L disk-raid5 /dev/sdb /dev/sdc /dev/sdd
Label: 'disk-raid5' uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 3 FS bytes used 112.00KiB
devid 1 size 6.00GiB used 1.23GiB path /dev/sdb
devid 2 size 6.00GiB used 1.21GiB path /dev/sdc
devid 3 size 6.00GiB used 1.21GiB path /dev/sdd
As you can see, all devices are connected and assigned to the RAID5.
Now let’s add it to our fstab file.
Use the UUID ‘5e8d29ae-aea8-4460-a049-fae62e9994fd’ from the ‘fi show’ command.
UUID=5e8d29ae-aea8-4460-a049-fae62e9994fd /media/btrfs-raid5 btrfs defaults 0 0
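If you prefer scripting the edit, the fstab entry can be built and appended from the shell. A minimal sketch; the actual append is left as a comment so the sketch stays side-effect free:

```shell
# Build the fstab entry from the UUID shown by 'btrfs fi show'
UUID="5e8d29ae-aea8-4460-a049-fae62e9994fd"
FSTAB_LINE="UUID=${UUID} /media/btrfs-raid5 btrfs defaults 0 0"
echo "$FSTAB_LINE"
# Append it with: echo "$FSTAB_LINE" | sudo tee -a /etc/fstab
```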
Now create our mountpoint.
sudo mkdir -p /media/btrfs-raid5
And test that it all works with a reboot.
After rebooting we should see our new filesystem. Note that it will show 18GB instead of 12GB; ‘df’ has a terrible way of reporting space for BTRFS filesystems.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/btrfs--vg-root 4.6G 1.9G 2.5G 43% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 477M 4.0K 477M 1% /dev
tmpfs 98M 984K 97M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 488M 0 488M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sdb 18G 17M 16G 1% /media/btrfs-raid5
/dev/sda1 236M 100M 124M 45% /boot
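The 18GB df reports is the raw size of the three member disks; RAID5 sacrifices one disk’s worth of space to parity, which is where the 12GB of usable space comes from. A quick sketch of that arithmetic:

```shell
# RAID5 usable capacity = (N - 1) * disk size; one disk's worth goes to parity
N=3; SIZE_GB=6
RAW=$(( N * SIZE_GB ))
USABLE=$(( (N - 1) * SIZE_GB ))
echo "raw: ${RAW}GB, usable: ${USABLE}GB"   # → raw: 18GB, usable: 12GB
```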
Enabling Samba
Add ‘/media/btrfs-raid5’ as a Samba share so we can add some files.
sudo nano /etc/samba/smb.conf
[btrfs-raid5]
comment = Test BTRFS RAID 5
browseable = yes
path = /media/btrfs-raid5
valid users = btrfs
writable = yes
Assign the correct rights.
sudo chown -R btrfs:btrfs /media/btrfs-raid5/
Restart the service to apply our changes.
sudo service smbd restart
Now we can copy some test data to the disks.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/btrfs--vg-root 4.6G 1.9G 2.5G 44% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 477M 4.0K 477M 1% /dev
tmpfs 98M 1.3M 97M 2% /run
none 5.0M 0 5.0M 0% /run/lock
none 488M 0 488M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sdb 18G 5.3G 7.6G 42% /media/btrfs-raid5
/dev/sda1 236M 100M 124M 45% /boot
Label: disk-raid5 uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 3 FS bytes used 5.28GiB
devid 1 size 6.00GiB used 3.95GiB path /dev/sdb
devid 2 size 6.00GiB used 3.93GiB path /dev/sdc
devid 3 size 6.00GiB used 3.93GiB path /dev/sdd
Test Two: Expanding the RAID5
For this test I’ve shut down the machine and added an extra disk ‘sde’.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 6G 0 disk
├─sda1 8:1 0 243M 0 part /boot
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 5.8G 0 part
├─btrfs--vg-root (dm-0) 252:0 0 4.8G 0 lvm /
└─btrfs--vg-swap_1 (dm-1) 252:1 0 1G 0 lvm [SWAP]
sdb 8:16 0 6G 0 disk
sdc 8:32 0 6G 0 disk
sdd 8:48 0 6G 0 disk
sde 8:64 0 6G 0 disk
Next, add it to the mount point ‘/media/btrfs-raid5’ (which df reports as being backed by /dev/sdb).
sudo btrfs device add /dev/sde /media/btrfs-raid5
Once added you can query the filesystem to see if it’s there.
Label: disk-raid5 uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 4 FS bytes used 5.28GiB
devid 1 size 6.00GiB used 3.95GiB path /dev/sdb
devid 2 size 6.00GiB used 3.93GiB path /dev/sdc
devid 3 size 6.00GiB used 3.93GiB path /dev/sdd
devid 4 size 6.00GiB used 0.00 path /dev/sde
What is noticeable is that there isn’t any data on this disk right after adding it. Note that with BTRFS you are responsible for rebalancing the RAID5 yourself once you start adding disks. Before we balance, let’s record the md5sums so we can check whether the balance did any harm.
md5sum /media/btrfs-raid5/*
03486548bc7b0f1a3881dc00c0f8c5f8 /media/btrfs-raid5/S01E01 FLEMISH HDTV x264.mp4
a9390aed84a6be8c145046772296db26 /media/btrfs-raid5/S01E02 FLEMISH HDTV x264.mp4
2e37ed514579ac282986efd78ac3bb76 /media/btrfs-raid5/S01E03 FLEMISH HDTV x264.mp4
1596a5e56f14c843b5c27e2d3ff27ebd /media/btrfs-raid5/S01E04 FLEMISH HDTV x264.mp4
f7d494d6858391ac5c312d141d9ee0e5 /media/btrfs-raid5/S01E05 FLEMISH HDTV x264.mp4
fe6f097ff136428bfc3e2a1b8e420e4e /media/btrfs-raid5/S01E06 FLEMISH HDTV x264.mp4
43c5314079f08570f6bb24b5d6fde101 /media/btrfs-raid5/S01E07 FLEMISH HDTV x264.mp4
3b5ea952b632bbc58f608d64667cd2a1 /media/btrfs-raid5/S01E08 FLEMISH HDTV x264.mp4
db6b8bf608de2008455b462e76b0c1dd /media/btrfs-raid5/S01E09 FLEMISH HDTV x264.mp4
0d5775373e1168feeef99889a1d8fe0a /media/btrfs-raid5/S01E10 FLEMISH HDTV x264.mp4
8dd4b25c249778f197fdb33604fdb998 /media/btrfs-raid5/S01E11 FLEMISH HDTV x264.mp4
edac6a857b137136a4d27bf6926e1287 /media/btrfs-raid5/S01E12 FLEMISH HDTV x264.mp4
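This before/after comparison can also be automated with ‘md5sum -c’. A minimal sketch on throwaway files; the paths here are illustrative, not the real share:

```shell
# Record checksums before a risky operation, then re-verify afterwards
DIR=$(mktemp -d)
echo "episode data" > "$DIR/S01E01.mp4"
SUMS=$(mktemp)
md5sum "$DIR"/* > "$SUMS"
# ... the balance / device change would happen here ...
md5sum -c "$SUMS"   # prints '<file>: OK' for every unchanged file
```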
Next start the balance.
sudo btrfs balance start /media/btrfs-raid5
Once done we should see that the data is evenly spread across the disks.
Label: disk-raid5 uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 4 FS bytes used 5.28GiB
devid 1 size 6.00GiB used 2.56GiB path /dev/sdb
devid 2 size 6.00GiB used 2.56GiB path /dev/sdc
devid 3 size 6.00GiB used 2.56GiB path /dev/sdd
devid 4 size 6.00GiB used 2.56GiB path /dev/sde
Now let’s generate our md5sums again to see if the data has changed.
md5sum /media/btrfs-raid5/*
03486548bc7b0f1a3881dc00c0f8c5f8 /media/btrfs-raid5/S01E01 FLEMISH HDTV x264.mp4
a9390aed84a6be8c145046772296db26 /media/btrfs-raid5/S01E02 FLEMISH HDTV x264.mp4
2e37ed514579ac282986efd78ac3bb76 /media/btrfs-raid5/S01E03 FLEMISH HDTV x264.mp4
1596a5e56f14c843b5c27e2d3ff27ebd /media/btrfs-raid5/S01E04 FLEMISH HDTV x264.mp4
f7d494d6858391ac5c312d141d9ee0e5 /media/btrfs-raid5/S01E05 FLEMISH HDTV x264.mp4
fe6f097ff136428bfc3e2a1b8e420e4e /media/btrfs-raid5/S01E06 FLEMISH HDTV x264.mp4
43c5314079f08570f6bb24b5d6fde101 /media/btrfs-raid5/S01E07 FLEMISH HDTV x264.mp4
3b5ea952b632bbc58f608d64667cd2a1 /media/btrfs-raid5/S01E08 FLEMISH HDTV x264.mp4
db6b8bf608de2008455b462e76b0c1dd /media/btrfs-raid5/S01E09 FLEMISH HDTV x264.mp4
0d5775373e1168feeef99889a1d8fe0a /media/btrfs-raid5/S01E10 FLEMISH HDTV x264.mp4
8dd4b25c249778f197fdb33604fdb998 /media/btrfs-raid5/S01E11 FLEMISH HDTV x264.mp4
edac6a857b137136a4d27bf6926e1287 /media/btrfs-raid5/S01E12 FLEMISH HDTV x264.mp4
Still all good.
Test Three: Replacing a disk
So let’s replace our first disk, sdb. I chose this disk as it represents the mount point in the df output, which makes it an ideal test case to break.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 6G 0 disk
├─sda1 8:1 0 243M 0 part /boot
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 5.8G 0 part
├─btrfs--vg-root (dm-0) 252:0 0 4.8G 0 lvm /
└─btrfs--vg-swap_1 (dm-1) 252:1 0 1G 0 lvm [SWAP]
sdb 8:16 0 6G 0 disk
sdc 8:32 0 6G 0 disk
sdd 8:48 0 6G 0 disk
sde 8:64 0 6G 0 disk
sdf 8:80 0 6G 0 disk
With RAID5 you can add and remove devices, although I would recommend ‘btrfs replace’. But since the aim is to see whether anything breaks, I am going ahead with the add/delete approach.
sudo btrfs device add /dev/sdf /media/btrfs-raid5
Label: disk-raid5 uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 5 FS bytes used 5.28GiB
devid 1 size 6.00GiB used 2.56GiB path /dev/sdb
devid 2 size 6.00GiB used 2.56GiB path /dev/sdc
devid 3 size 6.00GiB used 2.56GiB path /dev/sdd
devid 4 size 6.00GiB used 2.56GiB path /dev/sde
devid 5 size 6.00GiB used 0.00 path /dev/sdf
Now delete the old /dev/sdb.
sudo btrfs device delete /dev/sdb /media/btrfs-raid5
This event causes a rebalance, as the data would otherwise no longer be redundant. Note: this could take a long time.
Label: disk-raid5 uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 4 FS bytes used 5.28GiB
devid 2 size 6.00GiB used 2.56GiB path /dev/sdc
devid 3 size 6.00GiB used 2.56GiB path /dev/sdd
devid 4 size 6.00GiB used 2.56GiB path /dev/sde
devid 5 size 6.00GiB used 2.56GiB path /dev/sdf
OK, now that we are done, generate the md5sums again.
md5sum /media/btrfs-raid5/*
03486548bc7b0f1a3881dc00c0f8c5f8 /media/btrfs-raid5/S01E01 FLEMISH HDTV x264.mp4
a9390aed84a6be8c145046772296db26 /media/btrfs-raid5/S01E02 FLEMISH HDTV x264.mp4
2e37ed514579ac282986efd78ac3bb76 /media/btrfs-raid5/S01E03 FLEMISH HDTV x264.mp4
1596a5e56f14c843b5c27e2d3ff27ebd /media/btrfs-raid5/S01E04 FLEMISH HDTV x264.mp4
f7d494d6858391ac5c312d141d9ee0e5 /media/btrfs-raid5/S01E05 FLEMISH HDTV x264.mp4
fe6f097ff136428bfc3e2a1b8e420e4e /media/btrfs-raid5/S01E06 FLEMISH HDTV x264.mp4
43c5314079f08570f6bb24b5d6fde101 /media/btrfs-raid5/S01E07 FLEMISH HDTV x264.mp4
3b5ea952b632bbc58f608d64667cd2a1 /media/btrfs-raid5/S01E08 FLEMISH HDTV x264.mp4
db6b8bf608de2008455b462e76b0c1dd /media/btrfs-raid5/S01E09 FLEMISH HDTV x264.mp4
0d5775373e1168feeef99889a1d8fe0a /media/btrfs-raid5/S01E10 FLEMISH HDTV x264.mp4
8dd4b25c249778f197fdb33604fdb998 /media/btrfs-raid5/S01E11 FLEMISH HDTV x264.mp4
edac6a857b137136a4d27bf6926e1287 /media/btrfs-raid5/S01E12 FLEMISH HDTV x264.mp4
To see if my weird changes keep working, I rebooted to check whether the filesystem comes back up after the delete.
Yup, here it is again.
Label: disk-raid5 uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 4 FS bytes used 5.28GiB
devid 2 size 6.00GiB used 2.56GiB path /dev/sdc
devid 3 size 6.00GiB used 2.56GiB path /dev/sdd
devid 4 size 6.00GiB used 2.56GiB path /dev/sde
devid 5 size 6.00GiB used 2.56GiB path /dev/sdf
Test Four: Crashing a disk
In this case I physically (or virtually) disconnected a disk.
So when booting you will see ‘An error occurred while mounting /media/btrfs-raid5’. Press S to skip.
Let’s verify the filesystem.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 6G 0 disk
├─sda1 8:1 0 243M 0 part /boot
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 5.8G 0 part
├─btrfs--vg-root (dm-0) 252:0 0 4.8G 0 lvm /
└─btrfs--vg-swap_1 (dm-1) 252:1 0 1G 0 lvm [SWAP]
sdb 8:16 0 6G 0 disk
sdc 8:32 0 6G 0 disk
sdd 8:48 0 6G 0 disk
sde 8:64 0 6G 0 disk
So in our case it looks like we disconnected disk ‘sdf’, but this is a false report (see later). Let’s verify whether it is still mounted (it shouldn’t be, as we chose skip).
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/btrfs--vg-root 4.6G 1.9G 2.5G 44% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 477M 4.0K 477M 1% /dev
tmpfs 98M 1.2M 97M 2% /run
none 5.0M 0 5.0M 0% /run/lock
none 488M 0 488M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sda1 236M 100M 124M 45% /boot
Now let’s inspect our BTRFS filesystem.
Label: 'disk-raid5' uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 4 FS bytes used 5.28GiB
devid 2 size 6.00GiB used 2.59GiB path /dev/sdc
devid 4 size 6.00GiB used 2.59GiB path /dev/sdd
devid 5 size 6.00GiB used 2.56GiB path /dev/sde
*** Some devices missing
It shows devid 3 as missing, which wasn’t sdf but sdd. This is the false report: it seems the /dev/ assignments got shuffled somehow.
So let’s see if we can repair this. Let’s try mounting it.
sudo mount -v -t btrfs LABEL=disk-raid5 /media/btrfs-raid5/
That won’t work. It seems we need the ‘-o degraded’ mount option.
sudo mount -v -t btrfs -o degraded LABEL=disk-raid5 /media/btrfs-raid5/
This should work.
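If you want the machine to boot even with a disk missing, the option can also go into fstab. This is only a suggestion, and a double-edged one: with an automatic degraded mount you may never notice that a disk has failed, so use it consciously.

```
UUID=5e8d29ae-aea8-4460-a049-fae62e9994fd /media/btrfs-raid5 btrfs defaults,degraded 0 0
```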
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/btrfs--vg-root 4.6G 1.9G 2.5G 44% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 477M 12K 477M 1% /dev
tmpfs 98M 1.2M 97M 2% /run
none 5.0M 0 5.0M 0% /run/lock
none 488M 0 488M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sda1 236M 100M 124M 45% /boot
/dev/sdc 24G 5.3G 13G 31% /media/btrfs-raid5
When we inspect the filesystem, we can see that devid 3 isn’t mapped back in. However, it still ‘knows’ how much data should be there.
Label: disk-raid5 uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 4 FS bytes used 5.28GiB
devid 2 size 6.00GiB used 2.56GiB path /dev/sdc
devid 3 size 6.00GiB used 2.56GiB path
devid 4 size 6.00GiB used 2.56GiB path /dev/sdd
devid 5 size 6.00GiB used 2.56GiB path /dev/sde
Let’s verify if it has some impact on the data.
md5sum /media/btrfs-raid5/*
03486548bc7b0f1a3881dc00c0f8c5f8 /media/btrfs-raid5/S01E01 FLEMISH HDTV x264.mp4
a9390aed84a6be8c145046772296db26 /media/btrfs-raid5/S01E02 FLEMISH HDTV x264.mp4
2e37ed514579ac282986efd78ac3bb76 /media/btrfs-raid5/S01E03 FLEMISH HDTV x264.mp4
1596a5e56f14c843b5c27e2d3ff27ebd /media/btrfs-raid5/S01E04 FLEMISH HDTV x264.mp4
f7d494d6858391ac5c312d141d9ee0e5 /media/btrfs-raid5/S01E05 FLEMISH HDTV x264.mp4
fe6f097ff136428bfc3e2a1b8e420e4e /media/btrfs-raid5/S01E06 FLEMISH HDTV x264.mp4
43c5314079f08570f6bb24b5d6fde101 /media/btrfs-raid5/S01E07 FLEMISH HDTV x264.mp4
3b5ea952b632bbc58f608d64667cd2a1 /media/btrfs-raid5/S01E08 FLEMISH HDTV x264.mp4
db6b8bf608de2008455b462e76b0c1dd /media/btrfs-raid5/S01E09 FLEMISH HDTV x264.mp4
0d5775373e1168feeef99889a1d8fe0a /media/btrfs-raid5/S01E10 FLEMISH HDTV x264.mp4
8dd4b25c249778f197fdb33604fdb998 /media/btrfs-raid5/S01E11 FLEMISH HDTV x264.mp4
edac6a857b137136a4d27bf6926e1287 /media/btrfs-raid5/S01E12 FLEMISH HDTV x264.mp4
Still all good. Performance seems to be severely impacted, but this will be due to the missing drive.
Now delete all missing disks from the file system.
sudo btrfs device delete missing /media/btrfs-raid5/
As you can see, this causes a rebalance, so on a nearly full filesystem it will likely fail. A ‘btrfs replace’ should be used instead of this approach.
Total devices 3 FS bytes used 5.28GiB
devid 2 size 6.00GiB used 2.88GiB path /dev/sdc
devid 4 size 6.00GiB used 2.88GiB path /dev/sdd
devid 5 size 6.00GiB used 2.88GiB path /dev/sde
Let’s reuse /dev/sdb, so wipe it first.
Then add the disk to the BTRFS RAID5.
sudo btrfs device add /dev/sdb /media/btrfs-raid5
Label: disk-raid5 uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 4 FS bytes used 5.28GiB
devid 2 size 6.00GiB used 2.88GiB path /dev/sdc
devid 4 size 6.00GiB used 2.88GiB path /dev/sdd
devid 5 size 6.00GiB used 2.88GiB path /dev/sde
devid 6 size 6.00GiB used 0.00 path /dev/sdb
Now that we have added the /dev/sdb disk, we can see a severe imbalance across the devices. Luckily this is easily fixed with a balance.
sudo btrfs balance start /media/btrfs-raid5
Label: disk-raid5 uuid: 5e8d29ae-aea8-4460-a049-fae62e9994fd
Total devices 4 FS bytes used 5.28GiB
devid 2 size 6.00GiB used 2.56GiB path /dev/sdc
devid 4 size 6.00GiB used 2.56GiB path /dev/sdd
devid 5 size 6.00GiB used 2.56GiB path /dev/sde
devid 6 size 6.00GiB used 2.56GiB path /dev/sdb
Let’s check our files again.
md5sum /media/btrfs-raid5/*
03486548bc7b0f1a3881dc00c0f8c5f8 /media/btrfs-raid5/S01E01 FLEMISH HDTV x264.mp4
a9390aed84a6be8c145046772296db26 /media/btrfs-raid5/S01E02 FLEMISH HDTV x264.mp4
2e37ed514579ac282986efd78ac3bb76 /media/btrfs-raid5/S01E03 FLEMISH HDTV x264.mp4
1596a5e56f14c843b5c27e2d3ff27ebd /media/btrfs-raid5/S01E04 FLEMISH HDTV x264.mp4
f7d494d6858391ac5c312d141d9ee0e5 /media/btrfs-raid5/S01E05 FLEMISH HDTV x264.mp4
fe6f097ff136428bfc3e2a1b8e420e4e /media/btrfs-raid5/S01E06 FLEMISH HDTV x264.mp4
43c5314079f08570f6bb24b5d6fde101 /media/btrfs-raid5/S01E07 FLEMISH HDTV x264.mp4
3b5ea952b632bbc58f608d64667cd2a1 /media/btrfs-raid5/S01E08 FLEMISH HDTV x264.mp4
db6b8bf608de2008455b462e76b0c1dd /media/btrfs-raid5/S01E09 FLEMISH HDTV x264.mp4
0d5775373e1168feeef99889a1d8fe0a /media/btrfs-raid5/S01E10 FLEMISH HDTV x264.mp4
8dd4b25c249778f197fdb33604fdb998 /media/btrfs-raid5/S01E11 FLEMISH HDTV x264.mp4
edac6a857b137136a4d27bf6926e1287 /media/btrfs-raid5/S01E12 FLEMISH HDTV x264.mp4
To verify that the BTRFS filesystem persists, you can reboot.
Test Five: Byte corruption
For this test I will fill the entire RAID with as much data as possible. (Four 6GB drives in RAID5 should give around 18GB of usable space.)
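The filling itself is just writing files until the filesystem runs out of space. A scaled-down, runnable sketch of the approach; tiny sizes and a temp directory here, so point DEST at /media/btrfs-raid5 and crank bs/count up for a real fill:

```shell
# Write a few random filler files; scale bs/count up to actually fill 18GB
DEST=$(mktemp -d)   # stand-in for /media/btrfs-raid5
for i in 1 2 3; do
  dd if=/dev/urandom of="$DEST/filler-$i" bs=1K count=4 status=none
done
ls "$DEST" | wc -l   # → 3
```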
du -sh /media/btrfs-raid5
btrfs filesystem df /media/btrfs-raid5/
Data, RAID5: total=17.62GiB, used=17.02GiB
System, RAID5: total=96.00MiB, used=16.00KiB
Metadata, RAID5: total=288.00MiB, used=19.20MiB
unknown, single: total=16.00MiB, used=0.00
Also note that df isn’t really a great tool for calculating free space on our BTRFS partition.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/btrfs--vg-root 4.6G 1.9G 2.5G 44% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 477M 4.0K 477M 1% /dev
tmpfs 98M 1.3M 97M 2% /run
none 5.0M 0 5.0M 0% /run/lock
none 488M 0 488M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sda1 236M 100M 124M 45% /boot
/dev/sdb 24G 18G 50M 100% /media/btrfs-raid5
Now to test this I will shut down the machine and use wxHexEditor to corrupt some bytes of our virtual disks. This emulates a disk writing bad bytes.
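The same corruption can be done from the command line instead of wxHexEditor. A sketch on a throwaway file; against a real setup you would target the virtual disk image file, with the VM powered off:

```shell
# Overwrite a single byte in place: seek past byte 0, don't truncate
FILE=$(mktemp)
printf 'AAAA' > "$FILE"
printf 'B' | dd of="$FILE" bs=1 seek=1 conv=notrunc status=none
cat "$FILE"   # → ABAA
```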
Once this is done, start the machine again.
cat /var/log/syslog | grep BTRFS
This shows no errors, which is normal as BTRFS hasn’t scrubbed anything yet. Reading a corrupted file will trigger the checksum verification, or we can start a scrub of the disks manually.
sudo btrfs scrub start /media/btrfs-raid5/
Once scrubbing starts we can follow its progress.
sudo watch btrfs scrub status /media/btrfs-raid5/
scrub status for 5e8d29ae-aea8-4460-a049-fae62e9994fd
scrub started at Sun Nov 22 17:08:46 2015, running for 120 seconds
total bytes scrubbed: 15.62GiB with 3 errors
error details: csum=3
corrected errors: 3, uncorrectable errors: 0, unverified errors: 0
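For unattended monitoring you could pull the error counters out of the scrub status text. A sketch parsing the sample line above; the alerting hook is only a comment, an assumption rather than part of this setup:

```shell
# Extract counters from a captured 'btrfs scrub status' line
STATUS="corrected errors: 3, uncorrectable errors: 0, unverified errors: 0"
CORRECTED=$(echo "$STATUS" | sed -n 's/.*corrected errors: \([0-9]*\),.*/\1/p')
UNCORRECTABLE=$(echo "$STATUS" | sed -n 's/.*uncorrectable errors: \([0-9]*\),.*/\1/p')
echo "corrected=$CORRECTED uncorrectable=$UNCORRECTABLE"
# e.g. raise an alert when [ "$UNCORRECTABLE" -gt 0 ]
```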
Now syslog will show errors popping up.
Nov 22 17:09:22 btrfs kernel: [ 261.969305] BTRFS: checksum error at logical 44498075648 on dev /dev/sdc, sector 10316768, root 5, inode 281, offset 569163776, length 4096, links 1 (path: S01E05.mkv)
Nov 22 17:09:22 btrfs kernel: [ 261.969310] BTRFS: bdev /dev/sdc errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
Nov 22 17:09:22 btrfs kernel: [ 262.159200] BTRFS: fixed up error at logical 44498075648 on dev /dev/sdc
Nov 22 17:09:28 btrfs kernel: [ 267.507804] BTRFS: checksum error at logical 48935047168 on dev /dev/sdc, sector 12507592, root 5, inode 287, offset 426938368, length 4096, links 1 (path: S02E03.mkv)
Nov 22 17:09:28 btrfs kernel: [ 267.507809] BTRFS: bdev /dev/sdc errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
Nov 22 17:09:28 btrfs kernel: [ 267.717962] BTRFS: fixed up error at logical 48935047168 on dev /dev/sdc
Nov 22 17:10:29 btrfs kernel: [ 328.740414] BTRFS: checksum error at logical 45808136192 on dev /dev/sdc, sector 11169624, root 5, inode 283, offset 555790336, length 4096, links 1 (path: S01E07.mkv)
So these were the lengthy RAID5 tests and the last part of the BTRFS series. Next up is mhddfs.