Re: /bin/df showing btrfs filesystem full

On 2020/1/6 3:21 PM, Kenneth Topp wrote:
> 
> 
> Hi.
> 
> I have an issue where, periodically, a btrfs filesystem shows 100%
> utilized and 0% available:
> # df -h /home
> Filesystem           Size  Used Avail Use% Mounted on
> /dev/mapper/cprt-50   30T   20T     0 100% /home
> 
> 
> Then it goes back to normal as follows:
> 
> # df -h /home
> Filesystem           Size  Used Avail Use% Mounted on
> /dev/mapper/cprt-50   30T   20T  9.7T  67% /home

This is a known bug that triggers when the metadata reservation
reaches a certain threshold, and it affects only the v5.4 kernel.

The latest patchset trying to address this bug is here:
https://patchwork.kernel.org/project/linux-btrfs/list/?series=223921

> 
> 
> This filesystem was created on kernel 5.4.6, which it is currently
> running.  The filesystem went from 0 TB used to 20 TB used and would
> show this problem periodically as I was filling up the drive.  There
> were no ENOSPC issues, so I thought it was just related to the heavy
> writing, but now that the system is in regular service, it's still
> periodically "filling up".  Again, the only symptom I can see is
> GNOME and df showing the drive being full; nothing else indicates
> that the drive is full.
> 
> I have some other btrfs filesystems that didn't show any issues.
> They were created under earlier kernels, but with the same options.
> The other difference is that this new filesystem sits on top of 4Kn
> drives, whereas the others are all 512e.
> 
> Any advice would be welcome; for now I'm just ignoring the problem
> and making sure my backups are good.

Only the statfs() call is affected, so your data is fine.

But quite a few programs use statfs() to determine whether the fs is
full, so it can cause some inconvenience; there should be no data
corruption, though.
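
If you want to watch the symptom directly, a minimal statfs() sketch
like the one below (just an illustration; the default path is an
example) shows it.  df derives its "Avail" column from f_bavail, which
is the field that drops to 0 while the bug window is open:

#include <stdio.h>
#include <sys/vfs.h>	/* statfs(2) */

int main(int argc, char **argv)
{
	struct statfs st;
	const char *path = argc > 1 ? argv[1] : "/home";

	if (statfs(path, &st) < 0) {
		perror("statfs");
		return 1;
	}
	/* df computes "Avail" from f_bavail * f_bsize; during the bug
	 * window btrfs returns 0 in f_bavail even though f_blocks,
	 * f_bfree and the on-disk usage are still sane. */
	printf("size : %llu bytes\n",
	       (unsigned long long)st.f_blocks * st.f_bsize);
	printf("avail: %llu bytes\n",
	       (unsigned long long)st.f_bavail * st.f_bsize);
	return 0;
}

Run it against the mount point while df shows 100%; everything other
than the reported available space should still look normal.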

Thanks,
Qu

> 
> 
> Filesystem creation command:
> 
> mkfs.btrfs -f  -O no-holes -d single -m raid1 -L tm /dev/mapper/cprt-50
> /dev/mapper/cprt-53
> 
> The first mount was done with this:
> mount -o clear_cache,space_cache=v2 LABEL=tm /mnt
> 
> diagnostics commands:
> 
> 
> 
> #   uname -a
> Linux static.bllue.org 5.4.6-301.fc31.x86_64 #1 SMP Tue Dec 24 15:09:19
> EST 2019 x86_64 x86_64 x86_64 GNU/Linux
> #   btrfs --version
> btrfs-progs v5.4
> #   btrfs fi show
> Label: 't2'  uuid: ce50d21c-7727-4a53-b804-d02480643dfa
>         Total devices 2 FS bytes used 640.00KiB
>         devid    1 size 447.13GiB used 2.01GiB path /dev/mapper/cprt-30
>         devid    2 size 447.13GiB used 2.01GiB path /dev/mapper/cprt-31
> 
> Label: 'btm'  uuid: 0a5b42a7-0e39-48fa-be1f-4aa29bc323f2
>         Total devices 2 FS bytes used 19.45TiB
>         devid    1 size 14.55TiB used 9.75TiB path /dev/mapper/cprt-50
>         devid    2 size 14.55TiB used 9.75TiB path /dev/mapper/cprt-53
> 
> 
> #   btrfs fi df /home # Replace /home with the mount point of your
> btrfs-filesystem
> Data, single: total=19.45TiB, used=19.43TiB
> System, RAID1: total=32.00MiB, used=2.05MiB
> Metadata, RAID1: total=26.00GiB, used=25.73GiB
> GlobalReserve, single: total=512.00MiB, used=0.00B
