Hi guys,
I have a nasty issue to report. I have a btrfs RAID10 configuration in which I was replacing one disk with another. Unfortunately, during the replace, one of my disks (not in the array) apparently lost power (the cable looks to have been loose), which caused the machine to lock up.
Since then, I have not been able to boot normally with the array in fstab. I’ve added “recovery” to the mount options in my fstab line, and have resorted to uncommenting that line, mounting, then immediately commenting it out again (otherwise the machine will not boot successfully; it’s a headless box, so this is painful).
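For reference, the fstab line in question looks roughly like this (the UUID is the one from “btrfs fi show” below; the exact option list beyond “recovery” is approximate):

```
# /etc/fstab -- approximate; commented out between manual mount attempts
UUID=48ed8a66-731d-499b-829e-dd07dd7260cc  /media/camino  btrfs  defaults,recovery  0  0
```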
It seems to try to resume the replace, but dies shortly after mounting with:
kernel BUG at /home/kernel/COD/linux/fs/btrfs/volumes.c:5508
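A minimal sketch of the sequence that triggers it, plus what I assume might stop the auto-resumed replace (I have not actually tried the cancel yet, so that step is an assumption on my part, not something I can vouch for):

```shell
# Mount manually with the recovery option (avoids the fstab dance);
# any member device of the array should do here:
mount -o recovery /dev/sde /media/camino

# The interrupted replace auto-resumes on mount; its progress can be
# inspected with:
btrfs replace status /media/camino

# Assumption: cancelling the replace before it hits the crash might
# let the filesystem stay mounted:
btrfs replace cancel /media/camino
```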
uname -a
Linux ubuntu-server 4.4.0-040400rc6-generic #201512202030 SMP Mon Dec 21 01:32:09 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
btrfs --version
btrfs-progs v4.0
btrfs fi show
Label: none uuid: 48ed8a66-731d-499b-829e-dd07dd7260cc
Total devices 9 FS bytes used 14.59TiB
devid 0 size 4.55TiB used 3.16TiB path /dev/sde
devid 1 size 4.55TiB used 3.16TiB path /dev/sdn
devid 4 size 5.46TiB used 4.07TiB path /dev/sdh
devid 5 size 5.46TiB used 4.07TiB path /dev/sdi
devid 7 size 5.46TiB used 4.07TiB path /dev/sdm
devid 8 size 5.46TiB used 4.07TiB path /dev/sdl
devid 9 size 5.46TiB used 4.07TiB path /dev/sdg
devid 10 size 5.46TiB used 4.07TiB path /dev/sdj
devid 11 size 5.46TiB used 1.63TiB path /dev/sdk
btrfs fi df /media/camino/
Data, RAID10: total=14.57TiB, used=14.57TiB
System, RAID10: total=64.00MiB, used=1.28MiB
Metadata, RAID10: total=25.47GiB, used=23.80GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
dmesg output attached.
Hope you have some ideas! Thanks,
Asif
