Re: file system full on a single disk?

On Mon, Jan 13, 2020 at 4:29 PM Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
>
> On Mon, Jan 13, 2020 at 4:21 PM Christian Kujau <lists@xxxxxxxxxxxxxxx> wrote:
> >
> > On Mon, 13 Jan 2020, Chris Murphy wrote:
> > > It's a reporting bug. File system is fine.
> >
> > Well, I received some ENOSPC notifications from various apps, so it was a
> > real problem.
>
> Oh it's a real problem and a real bug. But the file system itself is OK.
>
> >
> > > > I'm running a --full-balance now and it's progressing, slowly. I've seen
> > > > tricks on the interwebs to temporarily add a ramdisk, run another balance,
> > > > remove the ramdisk again - but that seems hackish.
> > >
> > > I'd stop the balance. Balancing metadata in particular appears to make
> > > the problem more common. And you're right, it's hackish, it's not a
> > > great workaround for anything these days, and if it is, good chance
> > > it's a bug.
> >
> > For now, the balancing "helped", but the fs still shows only 391 GiB
> > allocated of the 824 GiB device:
> >
> > =======================================================================
> > # btrfs filesystem show /
> > Label: 'root'  uuid: 75a6d93a-5a5c-48e0-a237-007b2e812477
> >         Total devices 1 FS bytes used 388.00GiB
> >         devid    1 size 824.40GiB used 391.03GiB path /dev/mapper/luks-root
> >
> > # df -h /
> > Filesystem             Size  Used Avail Use% Mounted on
> > /dev/mapper/luks-root  825G  390G  433G  48% /
> > =======================================================================
> >
> > > In theory it should be enough to unmount then remount the file system;
> > > of course for sysroot that'd be a reboot.
> >
> > OK, I'll try a reboot next time.
> >
> > > There may be certain workloads that encourage it, that could be worked
> > > around temporarily using mount option metadata_ratio=1.
> >
> > I'll do that after it happens again, to see if this was a one-off or
> > happens regularly. The file system is rather new (created Dec 14) and
> > apart from spinning up some libvirt VMs (but no snapshots involved) the
> > workload is a mix of web browsing and compiling things, so nothing too
> > fancy.
>
> A less janky option is to use 5.3.18, or grab 5.5.0-rc6 from koji.
> I've been running the 5.5 rc kernels for a while for other reasons
> (i915 gotchas), and the one Btrfs bug I ran into, related to
> compression, has been fixed as of rc5.
>
> https://koji.fedoraproject.org/koji/buildinfo?buildID=1428886
>

This is the latest patchset as of about a week ago, and I'm not
actually seeing it in 5.5-rc6, so a tested fix may not be ready yet:
https://patchwork.kernel.org/project/linux-btrfs/list/?series=223921

Your best bet is likely to stick with 5.4.10 and just use the mount
option metadata_ratio=1. That shouldn't cause any other weird side
effects; it just asks Btrfs to allocate a metadata block group each
time a data block group is created, i.e. roughly a 256M metadata BG
for each 1G data BG. It'd also be useful to know if that doesn't
help. I haven't run into this bug myself, or I'd try it.
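
For example (a sketch, untested on my end since I can't reproduce the
bug), you'd remount with the option and then watch the data vs.
metadata block group allocation:

=======================================================================
# remount sysroot with metadata_ratio=1; this only affects new block
# group allocations, so add it to the options in /etc/fstab if you
# want it to survive a reboot
mount -o remount,metadata_ratio=1 /

# confirm the option took effect
grep ' / ' /proc/mounts

# watch data vs. metadata allocation as the workload runs
btrfs filesystem df /
=======================================================================

If metadata block groups keep pace with data block groups and the bogus
ENOSPC doesn't come back, that's a useful data point for the list.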


-- 
Chris Murphy


