Hi,
I have run three full balances in a row, each of them ending with an
error:
BTRFS info (device nvme1n1p1): 2 enospc errors during balance
BTRFS info (device nvme1n1p1): balance: ended with status: -28
(on the first balance run it was 4 enospc errors).
The filesystem has enough space to spare, though:
# btrfs fi show /
Label: none uuid: 34ea0387-af9a-43b3-b7cc-7bdf7b37b8f1
Total devices 1 FS bytes used 624.36GiB
devid 1 size 931.51GiB used 627.03GiB path /dev/nvme1n1p1
# btrfs fi df /
Data, single: total=614.00GiB, used=613.72GiB
System, single: total=32.00MiB, used=112.00KiB
Metadata, single: total=13.00GiB, used=10.64GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
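If a more detailed breakdown helps, I can also post the output of
# btrfs fi usage /
which, as I understand it, also shows the unallocated space explicitly.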
This is the state after the balances, but it was about the same before
them, except that data had roughly a 50GiB difference between total and
used.
The volume contains subvolumes (/ and /home) and snapshots (around 20
per subvolume, 40 in total, the oldest about 1 month old).
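Those snapshot numbers are approximate; if the exact list matters, I can
post the output of
# btrfs subvolume list -s /
as well.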
My questions are:
1. Why do I get ENOSPC errors on a device that has enough spare space?
2. Is this bad, and if so, how can I fix it?
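Regarding question 2: I have read that a filtered balance such as
# btrfs balance start -dusage=50 /
(rewriting only data chunks that are at most 50% full) can sometimes
work around ENOSPC during a balance, but I don't know whether that
applies to my situation.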
A little more (noteworthy) context, if you're interested:
The reason I started the first balance was that a df on the filesystem
showed 0% free space:
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/nvme1n1p1 976760584 655217424 0 100% /
...
and a big download (the Chromium sources) was aborted due to "not
enough space on device".
I monitored the first balance more closely. Right after it started, df
looked normal again and showed available blocks, but during the balance
it flip-flopped a couple of times between showing 0 available bytes
again and showing the expected difference between total size and used
bytes. I did not see this behavior during balances 2 and 3, but I also
did not watch those as closely.
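If it would help, I can run another balance and log this more
systematically, e.g. with something like
# watch -n 60 'df /; btrfs balance status /'
to capture how the numbers change over time.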
Thanks in advance for any insights and ideas on how to proceed, and a
healthy start into the new year to everyone.
