On Sun, Jun 07, 2020 at 03:35:36PM +0800, Qu Wenruo wrote:
> On 2020/6/7 1:12 PM, Michał Mirosław wrote:
> > Dear btrfs developers,
> >
> > I just added a new disk to already almost full filesystem and tried to
> > enable raid1 for metadata (transcript below).
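(For reference, the usual sequence for this operation, with /mnt
standing in for the actual mount point, is roughly:

  # btrfs device add /dev/mapper/btrfs2 /mnt
  # btrfs balance start -mconvert=raid1 /mnt

The full transcript is in my original mail.)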
> May I ask for your per-disk usage?
>
> There is a known bug (rare to hit) where completely unbalanced disk
> usage can lead to an unexpected ENOSPC (-28) error in certain critical
> code paths and cause the transaction abort you're hitting.
>
> If you have added a new disk to an almost-full filesystem, then I
> guess that would be the case...
# btrfs filesystem usage .
Overall:
    Device size:                   1.82TiB
    Device allocated:            932.51GiB
    Device unallocated:          930.49GiB
    Device missing:                  0.00B
    Used:                        927.28GiB
    Free (estimated):            933.86GiB    (min: 468.62GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB    (used: 0.00B)

Data,single: Size:928.47GiB, Used:925.10GiB
   /dev/mapper/btrfs1    927.47GiB
   /dev/mapper/btrfs2      1.00GiB

Metadata,RAID1: Size:12.00MiB, Used:1.64MiB
   /dev/mapper/btrfs1     12.00MiB
   /dev/mapper/btrfs2     12.00MiB

Metadata,DUP: Size:2.00GiB, Used:1.09GiB
   /dev/mapper/btrfs1      4.00GiB

System,DUP: Size:8.00MiB, Used:144.00KiB
   /dev/mapper/btrfs1     16.00MiB

Unallocated:
   /dev/mapper/btrfs1      1.02MiB
   /dev/mapper/btrfs2    930.49GiB
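So your guess matches: btrfs1 is completely allocated (only 1.02MiB
unallocated), while the new btrfs2 is almost entirely free. If that
layout is the trigger, I suppose moving some data chunks onto the new
disk before the conversion might sidestep the ENOSPC; something along
these lines (the chunk count is arbitrary):

  # btrfs balance start -dlimit=8 .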
> > The operation failed and
> > left the filesystem in a read-only state. Is this expected?
>
> Definitely not.
>
> If your disk layout fits my assumption, then the following patchset is
> worth trying:
> https://patchwork.kernel.org/project/linux-btrfs/list/?series=297005
I'll give it a try.
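
In the meantime, I assume an umount/mount cycle gets the filesystem
back to read-write, since the abort only affects the current mount;
perhaps with -o skip_balance so the interrupted balance does not
resume immediately and trip the same bug (again with /mnt standing in
for the real mount point):

  # umount /mnt
  # mount -o skip_balance /dev/mapper/btrfs1 /mnt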
Best Regards,
Michał Mirosław