On 7/5/20 9:42 am, Chris Murphy wrote:
> A raid1 volume has twice as many bytes to scrub as data reported by
> df.
It's scrubbed more than twice as many bytes, though.
> Can you tell us what kernel version?
5.4.0
> And also what you get for:
> $ sudo btrfs fi us /mp/
Overall:
    Device size:                  10.92TiB
    Device allocated:              7.32TiB
    Device unallocated:            3.59TiB
    Device missing:                  0.00B
    Used:                          7.31TiB
    Free (estimated):              1.80TiB      (min: 1.80TiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID1: Size:3.65TiB, Used:3.65TiB (99.89%)
   /dev/sda        3.65TiB
   /dev/sdb        3.65TiB

Metadata,RAID1: Size:8.00GiB, Used:6.54GiB (81.74%)
   /dev/sda        8.00GiB
   /dev/sdb        8.00GiB

System,RAID1: Size:64.00MiB, Used:544.00KiB (0.83%)
   /dev/sda       64.00MiB
   /dev/sdb       64.00MiB

Unallocated:
   /dev/sda        1.80TiB
   /dev/sdb        1.80TiB
> $ df -h
/dev/sda 5.5T 3.7T 1.9T 67% /home
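(For what it's worth, the numbers above are internally consistent: each
device holds one full copy, so a scrub should visit roughly "df used"
bytes per device, or about twice that across both. A quick sanity check
in Python, using the rounded figures from the output above:)

```python
TiB = 2 ** 40
GiB = 2 ** 30

# Figures from `btrfs fi us` and `df -h` above, as displayed (rounded).
data_per_device = 3.65 * TiB       # Data,RAID1 size on each device
metadata_per_device = 8.00 * GiB   # Metadata,RAID1 size on each device
df_used = 3.7 * TiB                # df reports one logical copy

# A scrub reads one full copy per device.
total_per_device = data_per_device + metadata_per_device
total_both = 2 * total_per_device

print(f"per device:   {total_per_device / TiB:.2f} TiB")  # ~3.66 TiB
print(f"both devices: {total_both / TiB:.2f} TiB")        # ~7.32 TiB
print(f"ratio to df:  {total_both / df_used:.2f}")        # ~2x
```

The per-device figure matches the "Total to scrub: 3.66TiB" that scrub
reports below, so the doubling itself is expected for RAID1; the problem
is that the bytes-scrubbed counters have blown well past it.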
> I think what you're seeing is a bug. Most of the size reporting in
> btrfs commands happens in btrfs-progs, whereas a scrub is only
> initiated from user space and most of the work is done by the kernel.
> But I don't know where the tracking code is.
No kidding. What concerns me now is that the scrub shows no signs of
ever stopping:
$ sudo btrfs scrub status -d /home
UUID:             85069ce9-be06-4c92-b8c1-8a0f685e43c6
scrub device /dev/sda (id 1) status
Scrub started:    Mon May  4 04:36:54 2020
Status:           running
Duration:         18:06:28
Time left:        31009959:50:08
ETA:              Fri Dec 13 03:58:24 5557
Total to scrub:   3.66TiB
Bytes scrubbed:   9.80TiB
Rate:             157.58MiB/s
Error summary:    no errors found
scrub device /dev/sdb (id 2) status
no stats available
Time left:        30892482:15:09
ETA:              Wed Jul 19 05:23:25 5544
Total to scrub:   3.66TiB
Bytes scrubbed:   8.86TiB
Rate:             158.18MiB/s
Error summary:    no errors found
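That "Time left" looks like an unsigned wraparound. If the estimate is
computed as (total - scrubbed) / rate in 64-bit unsigned arithmetic (an
assumption on my part; I haven't read the progs code), then once
scrubbed overshoots the total, the difference wraps to nearly 2^64
bytes, which reproduces the figure above almost exactly:

```python
TiB = 2 ** 40
MiB = 2 ** 20

# Figures for /dev/sda from the scrub status output above.
total = 3.66 * TiB       # "Total to scrub"
scrubbed = 9.80 * TiB    # "Bytes scrubbed" -- already past the total
rate = 157.58 * MiB      # bytes per second

# Hypothetical unsigned subtraction: a negative difference wraps mod 2^64.
remaining = (int(total) - int(scrubbed)) % 2 ** 64
hours_left = remaining / rate / 3600

print(f"{hours_left:,.0f} hours left")  # ~31 million hours, i.e. ~3,500 years
```

That lands within a fraction of a percent of the reported
31009959:50:08, so the ETA itself is just garbage-in from the broken
bytes-scrubbed counter, not a separate bug.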
Cheers,
Andrew