Re: balance + ENOSPC -> readonly filesystem

On 6/7/20 10:34 AM, Michał Mirosław wrote:
On Sun, Jun 07, 2020 at 03:35:36PM +0800, Qu Wenruo wrote:
On 2020/6/7 1:12 PM, Michał Mirosław wrote:
Dear btrfs developers,

I just added a new disk to an already almost-full filesystem and tried to
enable RAID1 for metadata (transcript below).
May I ask for your per-disk usage?

There is a known (but rarely hit) bug where completely unbalanced disk
usage can lead to an unexpected ENOSPC (-28) error in certain critical code
paths and cause the transaction abort you're hitting.

If you have added a new disk to an almost-full filesystem, then I guess that
would be the case...

# btrfs filesystem usage .
Overall:
     Device size:                   1.82TiB
     Device allocated:            932.51GiB
     Device unallocated:          930.49GiB
     Device missing:                  0.00B
     Used:                        927.28GiB
     Free (estimated):            933.86GiB      (min: 468.62GiB)
     Data ratio:                       1.00
     Metadata ratio:                   2.00
     Global reserve:              512.00MiB      (used: 0.00B)

Data,single: Size:928.47GiB, Used:925.10GiB
    /dev/mapper/btrfs1         927.47GiB
    /dev/mapper/btrfs2           1.00GiB

Metadata,RAID1: Size:12.00MiB, Used:1.64MiB
    /dev/mapper/btrfs1          12.00MiB
    /dev/mapper/btrfs2          12.00MiB

Metadata,DUP: Size:2.00GiB, Used:1.09GiB
    /dev/mapper/btrfs1           4.00GiB

System,DUP: Size:8.00MiB, Used:144.00KiB
    /dev/mapper/btrfs1          16.00MiB

Unallocated:
    /dev/mapper/btrfs1           1.02MiB
    /dev/mapper/btrfs2         930.49GiB

The old disk is full, and the fact that the metadata has a RAID1 profile prevents further metadata allocation/reshape.
Does the filesystem go read-only right after the mount? If not, a simple balance of the metadata should be enough; take care to select
the "single" profile for the metadata on this first attempt:

# btrfs balance start -mconvert=single <mnt-point>
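
(A side note, based on general btrfs-progs behaviour rather than anything in this thread: converting metadata from DUP/RAID1 down to single reduces redundancy, so balance may refuse the conversion unless it is forced. If it complains, something like the following should work:

# btrfs balance start -f -mconvert=single <mnt-point>

Only add -f if the unforced command refuses because the conversion would reduce metadata redundancy.)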

This should free about 4 GiB on the old disk. Then balance the data:

# btrfs balance start -d <mnt-point>
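
(Optional, and not part of the advice above: a full data balance rewrites roughly 925 GiB here and can take a long time. If you prefer to do it in stages, the standard "limit" balance filter processes only a given number of chunks per run, e.g.

# btrfs balance start -dlimit=100 <mnt-point>

which relocates at most 100 data chunks, roughly 100 GiB with the usual 1 GiB data chunk size, and can simply be repeated.)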

Then rebalance the metadata as RAID1, because now you should have enough space:

# btrfs balance start -mconvert=raid1 <mnt-point>
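
(Progress and the end result can be checked with the usual commands:

# btrfs balance status <mnt-point>
# btrfs filesystem usage <mnt-point>

When everything has finished, the usage output should show Metadata,RAID1 mirrored across both devices and no remaining Metadata,DUP on the old disk.)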

The operation failed and
left the filesystem in a read-only state. Is this expected?

Definitely not.

If your disk layout fits my assumption, then the following patchset is
worth trying:
https://patchwork.kernel.org/project/linux-btrfs/list/?series=297005
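
(For anyone wanting to test the series on top of a kernel git tree: save it from patchwork as an mbox and apply it with git am before rebuilding, e.g.

$ git am series-297005.mbox

The filename above is only a placeholder for wherever the downloaded series is saved.)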

I'll give it a try.

Best Regards,
Michał Mirosław



--
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


