Re: btrfs-progs reports nonsense scrub status

On Tue, May 5, 2020 at 2:39 PM Andrew Pam <andrew@xxxxxxxxxxxxxx> wrote:
>
> On 5/5/20 7:51 pm, Graham Cobb wrote:
> > Is there actually a scrub in progress?
>
> The scrub has been going for a couple of days now, and has scrubbed
> considerably more data than exists on the disks.  Will it ever finish?

A raid1 volume has twice as many bytes to scrub as the data reported by
df. Can you tell us what kernel version you're running, and what you get for:
$ sudo btrfs fi us /mp/
$ df -h

I'm using progs 5.6 and kernel 5.6.8 for this:

$ sudo btrfs scrub status /mnt/third
UUID:
Scrub resumed:    Tue May  5 08:45:41 2020
Status:           finished
Duration:         2:41:12
Total to scrub:   759.43GiB
Rate:             79.57MiB/s
Error summary:    no errors found
$ sudo btrfs fi us /mnt/third
Overall:
    Device size:         931.49GiB
    Device allocated:         762.02GiB
    Device unallocated:         169.48GiB
    Device missing:             0.00B
    Used:             759.43GiB
    Free (estimated):          85.17GiB    (min: 85.17GiB)
    Data ratio:                  2.00
    Metadata ratio:              2.00
    Global reserve:         512.00MiB    (used: 0.00B)

Data,RAID1: Size:379.00GiB, Used:378.56GiB (99.89%)
   /dev/mapper/sdd     379.00GiB
   /dev/mapper/sdc     379.00GiB

Metadata,RAID1: Size:2.00GiB, Used:1.15GiB (57.57%)
   /dev/mapper/sdd       2.00GiB
   /dev/mapper/sdc       2.00GiB

System,RAID1: Size:8.00MiB, Used:80.00KiB (0.98%)
   /dev/mapper/sdd       8.00MiB
   /dev/mapper/sdc       8.00MiB

Unallocated:
   /dev/mapper/sdd      84.74GiB
   /dev/mapper/sdc      84.74GiB

$ df -h
...
/dev/mapper/sdd    466G  381G   86G  82% /mnt/third
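
As a sanity check on those numbers (assuming "Total to scrub" counts
both raid1 copies):

   378.56 GiB data + 1.15 GiB metadata  ~= 379.71 GiB per copy
   379.71 GiB x 2 copies                ~= 759.42 GiB

which lines up with the 759.43 GiB "Total to scrub" and "Used" figures
above, while df is showing roughly a single copy.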

I think what you're seeing is a bug. Most of the size reporting in
btrfs commands is done in btrfs-progs; the scrub is initiated from
user space, but most of the work is done by the kernel. I don't know
where the progress-tracking code is, though.
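
For what it's worth, the live counters come from the kernel through the
scrub progress ioctl. A rough sketch of how user space can read them
while a scrub is running (not the actual btrfs-progs code; the mount
point and devid here are just placeholders):

/*
 * Rough sketch: read the kernel's live scrub counters for one device
 * with BTRFS_IOC_SCRUB_PROGRESS. "/mnt/third" and devid 1 are
 * placeholders for this example.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(void)
{
	struct btrfs_ioctl_scrub_args args;
	int fd = open("/mnt/third", O_RDONLY);	/* any fd on the mounted fs */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&args, 0, sizeof(args));
	args.devid = 1;		/* placeholder device id */

	/* The kernel fills in args.progress; fails if no scrub is running. */
	if (ioctl(fd, BTRFS_IOC_SCRUB_PROGRESS, &args) < 0) {
		perror("BTRFS_IOC_SCRUB_PROGRESS");
		return 1;
	}

	printf("data bytes scrubbed: %llu\n",
	       (unsigned long long)args.progress.data_bytes_scrubbed);
	printf("tree bytes scrubbed: %llu\n",
	       (unsigned long long)args.progress.tree_bytes_scrubbed);
	printf("last physical:       %llu\n",
	       (unsigned long long)args.progress.last_physical);
	return 0;
}

Note the open() is on the mount point rather than the block device,
since the ioctl operates on the mounted filesystem.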

For some of the sizes you have to infer the perspective being used.
There is no single correct perspective, so it's normal to get a bit
confused about which convention applies. On mdadm/LVM raid1, the mirror
isn't included in any of the space reporting, so it looks like only
half the storage is there. Btrfs, meanwhile, reports all of the
storage, and in some places shows data taking up twice as much space
(since behind the scenes each block group of extents literally has a
mirror).


--
Chris Murphy


