Re: btrfs freezing on writes

On Sat, Apr 11, 2020 at 09:46:43PM +0200, kjansen387 wrote:
> I have tried to rebalance metadata.
> 
> Starting point:
> # btrfs fi usage /storage
> Overall:
>     Device size:                  10.92TiB
>     Device allocated:              7.45TiB
>     Device unallocated:            3.47TiB
>     Device missing:                  0.00B
>     Used:                          7.35TiB
>     Free (estimated):              1.78TiB      (min: 1.78TiB)
>     Data ratio:                       2.00
>     Metadata ratio:                   2.00
>     Global reserve:              512.00MiB      (used: 0.00B)
> 
> Data,RAID1: Size:3.72TiB, Used:3.67TiB (98.74%)
>    /dev/sdc        2.81TiB
>    /dev/sdb        2.81TiB
>    /dev/sda     1017.00GiB
>    /dev/sdd      840.00GiB
> 
> Metadata,RAID1: Size:6.00GiB, Used:5.09GiB (84.86%)
>    /dev/sdc        3.00GiB
>    /dev/sdb        3.00GiB
>    /dev/sda        1.00GiB
>    /dev/sdd        5.00GiB
> 
> System,RAID1: Size:32.00MiB, Used:608.00KiB (1.86%)
>    /dev/sdb       32.00MiB
>    /dev/sdd       32.00MiB
> 
> Unallocated:
>    /dev/sdc      845.02GiB
>    /dev/sdb      845.99GiB
>    /dev/sda      845.02GiB
>    /dev/sdd     1017.99GiB
>
> I did:
> # btrfs fi resize 4:-2g /storage/
> # btrfs balance start -mdevid=4 /storage
> # btrfs fi resize 4:max /storage/
> 
> but the distribution of metadata ended up like before.
> 
> I also tried (to match the free space of the other disks):
> # btrfs fi resize 4:-172g /storage/
> # btrfs balance start -mdevid=4 /storage
> # btrfs fi resize 4:max /storage/
> 
> Again, the distribution of metadata ended up like before.
> 
> Any other tips to rebalance metadata?

The purpose of resize -2g was to leave a little less unallocated space
on one drive than on all the others, assuming all the drives start with
equal unallocated space.  The purpose of resize -172g was to remove the
extra unallocated space on sdd, so it would be equal to the other 3
drives.  You have to do _both_ of those before the balance, not one
resize per balance as you did.  Or just add the two numbers, i.e.
resize 4:-174g.
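
In other words, something like this (a sketch combining both shrinks
into one resize, using the same devid and mount point as your commands
above):

    # btrfs fi resize 4:-174g /storage
    # btrfs balance start -mdevid=4 /storage
    # btrfs fi resize 4:max /storage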

If you really want to be sure, resize by -200g (far more than
necessary), then balance start -mlimit=4,devid=4.  sdd currently has
5.00GiB of metadata, and metadata block groups on a filesystem this
size are 1GiB each, so the balance says "I know there are exactly
5 block groups now, and I want to leave exactly one behind," and the
resize says "I want no possibility of new block groups on sdd for
some time."
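
That is, roughly (same mount point as above; the final command is only
there to check where the metadata ended up afterwards):

    # btrfs fi resize 4:-200g /storage
    # btrfs balance start -mlimit=4,devid=4 /storage
    # btrfs fi resize 4:max /storage
    # btrfs fi usage /storage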

> On 10-Apr-20 01:07, Zygo Blaxell wrote:
> > On Thu, Apr 09, 2020 at 11:53:00PM +0200, kjansen387 wrote:
> > > btrfs fi resize 1:-1g /export;           # Assuming 4GB metadata
> > > btrfs fi resize 2:-2g /export;           # Assuming 5GB metadata
> > 
> > Based on current data, yes; however, it's possible that the device remove
> > you are already running might balance the metadata as a side-effect.
> > Redo the math with the values you get after the device remove is done.
> > You may not need to balance anything.
> > 
> > > btrfs balance start -mdevid=1 /export;   # Why only devid 1, and not 2?
> > 
> > We want balance to relocate metadata block groups that are on both
> > devids 1 and 2, i.e. the BG has a chunk on both drives at the same time.
> > Balance filters only allow one devid to be specified, but in this case
> > 'devid=1' or 'devid=2' is close enough.  All we want to do here is filter
> > out block groups where one mirror chunk is already on devid 3, 4, or 5,
> > since that would just place the metadata somewhere else on the same disks.
> > 
> > > btrfs fi resize 1:max /export;
> > > btrfs fi resize 2:max /export;
> > > 
> > > Thanks!
> > > 
> > > 
> 


