Hi Michał
On 6/7/20 12:50 PM, Michał Mirosław wrote:
On Sun, Jun 07, 2020 at 12:09:30PM +0200, Goffredo Baroncelli wrote:
[...]
# btrfs filesystem usage .
Overall:
    Device size:           1.82TiB
    Device allocated:    932.51GiB
    Device unallocated:  930.49GiB
    Device missing:          0.00B
    Used:                927.28GiB
    Free (estimated):    933.86GiB  (min: 468.62GiB)
    Data ratio:               1.00
    Metadata ratio:           2.00
    Global reserve:      512.00MiB  (used: 0.00B)

Data,single: Size:928.47GiB, Used:925.10GiB
   /dev/mapper/btrfs1   927.47GiB
   /dev/mapper/btrfs2     1.00GiB

Metadata,RAID1: Size:12.00MiB, Used:1.64MiB
   /dev/mapper/btrfs1    12.00MiB
   /dev/mapper/btrfs2    12.00MiB

Metadata,DUP: Size:2.00GiB, Used:1.09GiB
   /dev/mapper/btrfs1     4.00GiB

System,DUP: Size:8.00MiB, Used:144.00KiB
   /dev/mapper/btrfs1    16.00MiB

Unallocated:
   /dev/mapper/btrfs1     1.02MiB
   /dev/mapper/btrfs2   930.49GiB
The old disk is full, and the fact that the metadata has a RAID1 profile prevents further metadata allocation/reshape.
Does the filesystem go read-only after the mount? If not, a simple balance of the metadata should be enough; be careful to select the "single" profile for the metadata in this first attempt.
# btrfs balance start -mconvert=single <mnt-point>
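If the conversion takes a while, you can watch its progress from another shell (same <mnt-point> placeholder as above):
# btrfs balance status <mnt-point>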
This should free about 4GiB on the old disk (the space currently allocated to DUP metadata). Then balance the data
# btrfs balance start -d <mnt-point>
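If a full data balance takes too long, you can do it incrementally with a usage filter, starting from the least-filled block groups; the 50% threshold here is only an example:
# btrfs balance start -dusage=50 <mnt-point>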
Then rebalance the metadata as RAID1, because now you should have enough unallocated space on both disks.
# btrfs balance start -mconvert=raid1 <mnt-point>
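To double-check the result, look at the space report again, as at the top of this mail; the metadata should show up as RAID1 on both devices and both devices should keep some unallocated space:
# btrfs filesystem usage <mnt-point>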
Thanks! It worked all right! (data rebalance wasn't needed.)
Which metadata profile will you set?
If you set a RAID1 metadata profile without also balancing the data, in the long term you will face the same problem, because the old disk is still almost full. And even if you currently use a metadata profile other than RAID1, I suggest switching to RAID1 for the metadata.
From the "btrfs fi us" output, balancing the data is not highly urgent; however, I strongly suggest doing it soon.
--
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D 17B2 0EDA 9B37 8B82 E0B5