If, before the first balance run, many chunks were only partially filled, behavior like this can happen, because btrfs aggregates the used space into new chunks. A second full balance run should solve this.

You don't need to move data away; btrfs only needs a few GB of unallocated space for balancing (see the output of "btrfs fi us /mnt").

In general I prefer a small balance script like:

btrfs balance start -musage=50 -dusage=50 /mnt
btrfs balance start -musage=90 -dusage=90 /mnt
btrfs balance start /mnt

This first compacts all chunks that are less than 50% full, then does the same with chunks that are less than 90% full. The last line does a full rebalance, which moves ALL chunks; only run it if you really need to. (A script form of this is sketched after the quoted message below.)

On Wed, 9 Jan 2019 at 18:23, Karsten Vinding <karstenvinding@xxxxxxxxx> wrote:
>
> Just a short answer.
>
> I didn't use the replace command.
>
> I added the new drive to the pool / array, and checked that it was
> registered.
> Following that I removed the 1TB drive with "btrfs device delete <drive>".
>
> As far as I know this should avoid the need to resize the new drive.
>
> "btrfs fi us" shows all the space as available. The drive shows 1.7TB as
> unallocated.
>
> I have started moving some data off the array to give btrfs some more
> room to move, and will follow what happens when I try a new balance
> later on.
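
For reference, the incremental balance above can also be wrapped in a small script. This is only a sketch, assuming /mnt stands in for the real mount point and that data and metadata use the same usage thresholds:

#!/bin/sh
# Sketch of an incremental balance: compact lightly filled chunks first,
# then fuller ones. /mnt is a placeholder for the actual mount point.
set -e
MNT=/mnt

for usage in 50 90; do
    # Only rewrite data and metadata chunks that are less than ${usage}% full.
    btrfs balance start -dusage="$usage" -musage="$usage" "$MNT"
done

# A full rebalance rewrites ALL chunks; uncomment only if it is really needed.
# btrfs balance start "$MNT"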
