Hello Jim,
you wrote on 05.06.12:
> /dev/sda 11T 4.9T 6.0T 46% /btrfs
> [root@advanced ~]# btrfs fi show
> failed to read /dev/sr0
> Label: none uuid: c21f1221-a224-4ba4-92e5-cdea0fa6d0f9
> Total devices 12 FS bytes used 4.76TB
> devid 6 size 930.99GB used 429.32GB path /dev/sdf
> devid 5 size 930.99GB used 429.32GB path /dev/sde
> devid 8 size 930.99GB used 429.32GB path /dev/sdh
> devid 9 size 930.99GB used 429.32GB path /dev/sdi
> devid 4 size 930.99GB used 429.32GB path /dev/sdd
> devid 3 size 930.99GB used 429.32GB path /dev/sdc
> devid 11 size 930.99GB used 429.08GB path /dev/sdk
> devid 2 size 930.99GB used 429.32GB path /dev/sdb
> devid 10 size 930.99GB used 429.32GB path /dev/sdj
> devid 12 size 930.99GB used 429.33GB path /dev/sdl
> devid 7 size 930.99GB used 429.32GB path /dev/sdg
> devid 1 size 930.99GB used 429.09GB path /dev/sda
> Btrfs v0.19-35-g1b444cd
> df -h and btrfs fi show seem to be in good size agreement. Btrfs was
> created as raid1 metadata and raid0 data. I would like to delete the
> last 4 drives leaving 7T of space to hold 4.9T of data. My plan
> would be to remove /dev/sdi, j, k, l one at a time. After all are
> deleted run "btrfs fi balance /btrfs".
I'd prefer

btrfs device delete /dev/sdi /btrfs
btrfs filesystem balance /btrfs
btrfs device delete /dev/sdj /btrfs
btrfs filesystem balance /btrfs

etc. - after every "delete", run the matching "balance" (a scripted version of this is sketched below). Note that "device delete" also wants the mount point as its last argument.
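If you want to script the whole sequence, a minimal sketch could look like this (assuming the filesystem stays mounted at /btrfs and the device names do not change while it runs):

#!/bin/sh
# remove the four devices one at a time, rebalancing after each delete
for dev in /dev/sdi /dev/sdj /dev/sdk /dev/sdl; do
    btrfs device delete "$dev" /btrfs || break   # stop if a delete fails
    btrfs filesystem balance /btrfs
done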
That may take many hours - I use the last lines of "dmesg" to
extrapolate the remaining time (btrfs writes a message about once a
minute).
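For example (just a sketch - the exact wording of the kernel message may differ between versions):

# show the latest relocation messages, roughly one per block group
dmesg | grep -i 'relocating block group' | tail -n 5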
And you can't use the console from which you started the "balance"
command, so I wrap it like this:
echo 'btrfs filesystem balance /btrfs' | at now
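The same trick works for a whole delete/balance pair, e.g. (again only a sketch):

at now <<'EOF'
btrfs device delete /dev/sdi /btrfs
btrfs filesystem balance /btrfs
EOF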
Best regards!
Helmut