Dealing with lost disks in a btrfs array

Last weekend I moved the intestines of my computer from one case to another, bigger one, the reason being that I needed to plug in my NVIDIA GPU for deep learning. In the process I somehow lost (temporarily) the ability to connect one PCIe NVMe converter, and thus one disk of my btrfs multi-device array. Despite my expectation of a huge disaster, recovering and getting up and running again turned out to be rather smooth.

So, after moving all the pieces from one case to the other, and taking out one disk that was part of the btrfs array housing my entire Linux installation, I booted into Linux for the first time and was greeted by the initramfs with the message that it could not mount the root filesystem. Not surprising: btrfs refused to mount the array because one disk was missing.

Fortunately, the initramfs ships with enough useful tools to fix this; well, in this case only the btrfs binary is necessary. Let us first look at the output of btrfs:

(initramfs) btrfs fi show
warning, device 6 is missing
Label: none  uuid: XXXX....
        Total devices 7 FS bytes used 2.52TiB
        devid    1 size ...
        devid    2 size ...
        ...
        devid    5 size ...
        devid    7 size ...
        *** Some devices missing

(initramfs)

Yes, well, that was something I had expected. Fortunately, as I wrote in a previous blog post, all data and metadata are raid1, so we should be safe.
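
As a sanity check, once the filesystem is mounted (the next step anyway), the block group profiles can be confirmed with btrfs filesystem df: if everything is listed as raid1, each block still has a copy on one of the remaining disks. Roughly like this, with the exact sizes elided:

(initramfs) btrfs filesystem df /abc
Data, RAID1: total=..., used=...
System, RAID1: total=..., used=...
Metadata, RAID1: total=..., used=...
GlobalReserve, single: total=..., used=...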

The usual next step is mounting the disk array in degraded mode:

(initramfs) mkdir /abc
(initramfs) mount -o degraded,rw /dev/sdb3 /abc
[ nnn ] fuseblk: Unknown parameter 'degraded'
[ nnn ] BTRFS info (device sdb3): allowing degraded mounts
[ nnn ] BTRFS info (device sdb3): disk space caching is enabled
[ nnn ] BTRFS info (device sdb3): has skinny extents
[ nnn ] BTRFS warning (device sdb3): device 6 uuid XXXXX is missing
[ nnn ] BTRFS info (device sdb3): enabling ssd optimizations
(initramfs)

So that worked out, and the filesystem was mounted. Since I would not be able to replace the device with a new one for some time, the next step was removing the missing device from the array:

(initramfs) btrfs dev del missing /abc
[ nnn ] BTRFS info (device sdb3): relocating block group NNNN flags data|raid1
[ nnn ] BTRFS info (device sdb3): found 4217 extents, stage: move data extents
[ nnn ] BTRFS info (device sdb3): stage: update data pointers
...

That was again as expected: btrfs removed the device, and since all my data and metadata were on raid1, it copied the lost block groups from the remaining copy into free space on the other disks of the array. After this process, which took quite some time, all the (meta)data was again available in two copies on different disks.
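
To double-check, btrfs fi show can be run once more after the removal: the missing-device warning should be gone and the device count down by one. A sketch of what the output looks like afterwards (sizes elided as above):

(initramfs) btrfs fi show
Label: none  uuid: XXXX....
        Total devices 6 FS bytes used 2.52TiB
        devid    1 size ...
        ...
        devid    7 size ...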

With that, the recovery was finished: I unmounted the array from /abc and rebooted back into Linux, which brought me to the normal login screen, everything as usual, only with slightly reduced disk space.
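
Those last steps in the initramfs boil down to something like the following (the exact commands available depend on your initramfs, but busybox usually provides both; -f forces the reboot from the rescue shell):

(initramfs) umount /abc
(initramfs) reboot -f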

Later on I finally received a PCIe x1 NVMe extender, which allowed me to connect the last disk that had been lying around on my table. I thought I could add it to the btrfs array as usual with

btrfs device add /dev/nvme0n1p1 /

but this didn’t work out, because btrfs recognized an existing btrfs filesystem on the disk (well, it had been part of the array), and even adding -f didn’t help (it actually should have, from what I read). So I used fdisk to create a new partition table and wipe out some of the old btrfs metadata, and after that a

btrfs device add -f /dev/nvme0n1p1 /

(mind the -f) worked. So we are back at the previous state, albeit not balanced for now. But the btrfs-tools package ships a rebalance systemd trigger, which I have activated, so at some point in the future the array will be rebalanced. In the meantime, new data will mostly go to this disk.
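
For the impatient, a balance can also be started by hand instead of waiting for the timer. A rough sketch (a full balance of a multi-terabyte raid1 array can take many hours; the usage filters restrict it to mostly-empty block groups):

# start a balance in the background, limited to block groups at most half full
btrfs balance start --background -dusage=50 -musage=50 /
# check how far it has got
btrfs balance status /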

So all in all, it was surprisingly straightforward and simple to get back to a working system. Again, big thanks to the btrfs developers and to all open source developers!

1 Response

  1. Ferry says:

    Hi,

    to wipe out some data / clear filesystem / partition markers you can use wipefs.

    wipefs -a /dev/nvme0n1
    Will do wonders. Your partition will be gone after that 😉
