Hello,
I originally had a RAID array with six 4TB drives, which was more
than 80 percent full. So I bought a 10TB drive, added it to the
array, and gave the command to remove the oldest drive in the array:
btrfs device delete /dev/sda /mnt/btrfs-raid
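(For reference, the new drive had been added beforehand with what
was, as far as I remember, the standard add command; the /dev/sdb
path is taken from the "fi show" output below:
btrfs device add /dev/sdb /mnt/btrfs-raid )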
I kept a terminal with "watch btrfs fi show" open, and it showed that
the size of /dev/sda had been set to zero and that data was being
redistributed to the other drives. All seemed well, but now the
process has stalled with 8GiB left on /dev/sda. It also seems that
the size of the drive has been reset to its original value of
3.64TiB:
Label: none  uuid: 1609e4e1-4037-4d31-bf12-f84a691db5d8
        Total devices 7 FS bytes used 8.07TiB
        devid    1 size 3.64TiB used 8.00GiB path /dev/sda
        devid    2 size 3.64TiB used 2.73TiB path /dev/sdc
        devid    3 size 3.64TiB used 2.73TiB path /dev/sdd
        devid    4 size 3.64TiB used 2.73TiB path /dev/sde
        devid    5 size 3.64TiB used 2.73TiB path /dev/sdf
        devid    6 size 3.64TiB used 2.73TiB path /dev/sdg
        devid    7 size 9.10TiB used 2.50TiB path /dev/sdb
I see no more btrfs worker processes and no more activity in iotop.
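(I looked for activity with something along the lines of:
ps ax | grep -i btrfs
iotop -o
where iotop's -o flag limits the listing to processes that are
actually doing I/O.)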
I am using a current Debian stretch with kernel 4.9.0-8 and
btrfs-progs 4.7.3-1.
How should I proceed? I have a backup, but would prefer an easier
and less time-consuming way out of this mess.
Yours
Stefan