On Mon, Jan 13, 2020 at 3:28 PM Christian Kujau <lists@xxxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> I realize that this comes up every now and then but always for slightly
> more complicated setups, or so I thought:
>
>
> ============================================================
> # df -h /
> Filesystem             Size  Used Avail Use% Mounted on
> /dev/mapper/luks-root  825G  389G     0 100% /
>
> # btrfs filesystem show /
> Label: 'root'  uuid: 75a6d93a-5a5c-48e0-a237-007b2e812477
>         Total devices 1 FS bytes used 388.00GiB
>         devid    1 size 824.40GiB used 395.02GiB path /dev/mapper/luks-root
>
> # blockdev --getsize64 /dev/mapper/luks-root | awk '{print $1/1024^3, "GB"}'
> 824.398 GB
>
> # btrfs filesystem df /
> Data, single: total=388.01GiB, used=387.44GiB
> System, single: total=4.00MiB, used=64.00KiB
> Metadata, single: total=2.01GiB, used=1.57GiB
> GlobalReserve, single: total=512.00MiB, used=80.00KiB
> ============================================================
>
>
> This is on a Fedora 31 (5.4.8-200.fc31.x86_64) workstation. Where did the
> other 436 GB go? Or, why are only 395 GB allocated from the 824 GB device?
It's a reporting bug. The file system is fine.
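Just to put numbers on it from your own output: the device is 824.40GiB
and only 395.02GiB of that is allocated to chunks, so roughly 429GiB is
still unallocated. Btrfs allocates chunks on demand as data and metadata
grow, so ~395GiB allocated for ~388GiB used is completely normal; the
bug is that df should be counting the unallocated space toward Avail
instead of reporting 0. If you want a view that breaks out allocated vs.
unallocated directly, btrfs has one built in (nothing special below,
just your mount point):

============================================================
# btrfs filesystem usage /
============================================================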
> I'm running a --full-balance now and it's progressing, slowly. I've seen
> tricks on the interwebs to temporarily add a ramdisk, run another balance,
> remove the ramdisk again - but that seems hackish.
I'd stop the balance. Balancing metadata in particular appears to make
the problem more common. And you're right, it's hackish; it's not a
great workaround for anything these days, and if it is, there's a good
chance it's really a bug.
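If the balance is still going, 'btrfs balance status' will tell you, and
'btrfs balance cancel' stops it; cancel waits for the block group
currently being processed to finish, so it can take a little while to
return:

============================================================
# btrfs balance status /
# btrfs balance cancel /
============================================================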
> Isn't there a way to prevent this from happening? (Apart from better
> monitoring, so I can run the balance at an earlier stage next time).
In theory it should be enough to unmount and then remount the file
system; of course, for sysroot that means a reboot. There may be certain
workloads that encourage it, and those could be worked around
temporarily with the mount option metadata_ratio=1.
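Something along these lines is what I mean. I haven't verified that
metadata_ratio takes effect on a remount rather than only at mount time,
so treat it as a sketch; if it doesn't, adding metadata_ratio=1 to the
options for / in /etc/fstab and rebooting would accomplish the same
thing:

============================================================
# mount -o remount,metadata_ratio=1 /
============================================================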
--
Chris Murphy