On Mon, Aug 29, 2016 at 9:05 PM, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
> On Mon, Aug 29, 2016 at 10:04 AM, ojab // <ojab@xxxxxxx> wrote:
> What do you get for 'btrfs fi us <mp>'
$ sudo btrfs fi us /mnt/xxx/
Overall:
    Device size:                   3.64TiB
    Device allocated:              1.82TiB
    Device unallocated:            1.82TiB
    Device missing:                  0.00B
    Used:                          1.81TiB
    Free (estimated):              1.83TiB    (min: 943.55GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB    (used: 0.00B)

Data,RAID0: Size:1.81TiB, Used:1.80TiB
   /dev/sdb1    928.48GiB
   /dev/sdc1    928.48GiB

Metadata,RAID1: Size:3.00GiB, Used:2.15GiB
   /dev/sdb1      3.00GiB
   /dev/sdc1      3.00GiB

System,RAID1: Size:32.00MiB, Used:176.00KiB
   /dev/sdb1     32.00MiB
   /dev/sdc1     32.00MiB

Unallocated:
   /dev/sdb1      1.01MiB
   /dev/sdc1      1.00MiB
   /dev/sdd1      1.82TiB
>
> You can see what the state of block groups are with btrfs-debugfs
> which is in kdave btrfs-progs git. Chances are you need a larger
> value, -dusage=15 -musage=15 to free up space on devid 1 and 2. Then
> maybe devid 3 can be removed.
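If I understand correctly, that means a filtered balance along these
lines (thresholds as suggested; mount point as above):

$ sudo btrfs balance start -dusage=15 -musage=15 /mnt/xxx/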
btrfs-debugfs output:
https://gist.github.com/ojab/a3c59983e8fb6679b8fdc0e88c0c9e60
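For reference, I generated that dump roughly as below (assuming the
script's -b mode, which prints per-block-group usage):

$ sudo python ./btrfs-debugfs -b /mnt/xxx/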
Before the `delete` there was about 60GiB of free space; it looks like
it was filled up during the `delete` (I've seen similar behaviour
during `btrfs fi defrag`), so I should use `-dusage=69` and up.
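One way to step the threshold up incrementally (just a sketch, values
picked by hand):

$ for u in 30 50 69; do sudo btrfs balance start -dusage=$u /mnt/xxx/; done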
I don't quite understand what exactly btrfs is trying to do here: I
assume that block groups should be relocated to the new/empty drive,
but during the `delete` `btrfs fi us` shows

Unallocated:
   /dev/sdc1     16.00EiB

so the partition being deleted is counted as the largest possible
empty drive (16.00EiB is the full unsigned 64-bit range, so its size
presumably underflows once it is marked for removal) and block groups
are relocated to it instead of to the new/empty drive? (kernel-4.7.2 &
btrfs-progs-4.7.1 here)
Is there any way to see where and why block groups are relocated
during `delete`?
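So far the best I've come up with is polling the per-device numbers
while the delete runs (assuming watch(1) is available):

$ watch -n 10 'sudo btrfs dev usage /mnt/xxx/'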
//wbr ojab