Re: raid0 and different sized devices

On Sat, Jul 11, 2020 at 12:28 AM Andrei Borzenkov <arvidjaar@xxxxxxxxx> wrote:
>
> 11.07.2020 04:37, Chris Murphy wrote:
> > Summary:
> >
> > df claims this volume is full, which is how it actually behaves. Rsync
> > fails with an out-of-space message. But 'btrfs fi us' reports
> > seemingly misleading/incorrect information:
> >
> >     Free (estimated):          12.64GiB    (min: 6.33GiB)
> >
> > If Btrfs can't do single-device raid0, and it seems it can't, then
> > this free space reporting seems wrong twice (both values).
> >
>
> This space can be used with the single or dup profiles, so it is
> actually correct (the second number being for dup). It would of course
> be nice to get extended output ("how much space for each profile"),
> but as an "estimation of theoretical free space" it is absolutely
> correct.
>
> Of course I do not know whether it is correct by design or by coincidence :)
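
(Sanity-checking that arithmetic: dup stores two copies of everything,
so the dup estimate should be half the single estimate, and
12.64 GiB / 2 = 6.32 GiB, which matches the reported 6.33 GiB min to
within rounding.)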

I also wonder about the effect of raid1 metadata in this case. Once
the smaller device's unallocated space is exhausted, the filesystem
certainly can't allocate another raid1 chunk, even if the raid0
profile were able to support single-stripe raid0 block groups on a
single device.

And in fact we end up with a weird situation where premature out of
space can still happen with -d single -m raid1. What should happen is
that data block groups are created only on the large device, in
effect making the small device metadata-only. But since in this
extreme example that amounts to only ~700M, it's just a matter of
time before we hit metadata exhaustion, because there's no fallback
to single-profile metadata.
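
For anyone who wants to poke at this, a rough loop-device sketch
(untested here; the image paths and mount point are placeholders, and
the sizes just mirror the ones from this thread):

  truncate -s 2G  /tmp/small.img
  truncate -s 16G /tmp/large.img
  losetup -f --show /tmp/small.img    # prints e.g. /dev/loop0
  losetup -f --show /tmp/large.img    # prints e.g. /dev/loop1
  mkfs.btrfs -d single -m raid1 /dev/loop0 /dev/loop1
  mount /dev/loop0 /mnt
  # fill it with data, then watch per-device allocation:
  btrfs filesystem usage -T /mnt

Once the small device's unallocated space is gone, any further raid1
metadata chunk allocation should fail, even with free space remaining
on the large device.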

This is a peculiar case in a VM, where it's easy to create such
(somewhat) contrived scenarios. In the real world we're not likely to
see these kinds of problems, I think. A 2 GiB device is rare indeed,
let alone one paired with a ~16 GiB device.

We could also question the UI/UX of an installer that allows
selecting multiple devices for automatic partitioning. But it is
allowed, and currently defaults to LVM+ext4, so the result is a
concat/linear arrangement that just works, however fragile it may be
to device failure. The closest approximation for Btrfs would be
mkfs.btrfs -d single -m single. The next closest is -d single -m
raid1, which at least allows the possibility of salvaging the data on
the surviving drive, and is probably worth the risk of metadata
exhaustion in the rare case of including a small second device in the
pool.
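
Concretely, the two layouts under discussion (device names are
placeholders):

  # closest analogue to the LVM+ext4 concat default
  mkfs.btrfs -d single -m single /dev/sda2 /dev/sdb2

  # same data layout, but metadata mirrored across both devices
  mkfs.btrfs -d single -m raid1  /dev/sda2 /dev/sdb2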

-- 
Chris Murphy



