On Wed, May 6, 2020 at 11:10 PM Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
>
> On Wed, May 6, 2020 at 7:11 PM Andrew Pam <andrew@xxxxxxxxxxxxxx> wrote:
> >
> > > $ sudo btrfs fi us /mp/
> >
> > Overall:
> > Device size: 10.92TiB
> > Device allocated: 7.32TiB
> > Device unallocated: 3.59TiB
> > Device missing: 0.00B
> > Used: 7.31TiB
>
>
> Bytes to scrub should be 7.31TiB...
>
>
> > $ sudo btrfs scrub status -d /home
> > UUID: 85069ce9-be06-4c92-b8c1-8a0f685e43c6
> > scrub device /dev/sda (id 1) status
> > Scrub started: Mon May 4 04:36:54 2020
> > Status: running
> > Duration: 18:06:28
> > Time left: 31009959:50:08
> > ETA: Fri Dec 13 03:58:24 5557
> > Total to scrub: 3.66TiB
> > Bytes scrubbed: 9.80TiB
>
>
> So two bugs: Total to scrub is wrong, and the scrubbed byte count is
> bigger than both the reported total to scrub and the correct total
> that should be scrubbed.
>
> Three bugs, the time is goofy. This sounds familiar. Maybe just
> upgrade your btrfs-progs.

This was fixed in 5.2.1. I'm not sure why you're seeing this.

commit 96ed8e801fa2fc2d8a99e757566293c05572ebe1
Author: Grzegorz Kowal <grzegorz@xxxxxxxxxxxx>
Date:   Sun Jul 7 14:58:56 2019 -0300

    btrfs-progs: scrub: fix ETA calculation
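
If it's a question of which btrfs-progs is actually installed, 'btrfs
--version' is a quick check; the output should look something like
(version number here is just an example):

$ btrfs --version
btrfs-progs v5.4.1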

What I would do is cancel the scrub, and then delete the applicable
file in /var/lib/btrfs, which is the file that keeps track of scrub
progress. Then run 'btrfs scrub status' on that file system; it should
say there are no stats, but it'd be interesting to know whether Total
to Scrub is sane. You can also start another scrub, then check status
again and see if it's still sane or not. If not, I'd cancel it and
keep troubleshooting.
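
If it's useful, a rough sketch of that sequence (the progress file in
/var/lib/btrfs should be named after the filesystem UUID, something
like scrub.status.<UUID>, but confirm the exact name with ls before
removing anything):

$ sudo btrfs scrub cancel /home
$ sudo ls /var/lib/btrfs
$ sudo rm /var/lib/btrfs/scrub.status.85069ce9-be06-4c92-b8c1-8a0f685e43c6
$ sudo btrfs scrub status /home
$ sudo btrfs scrub start /home
$ sudo btrfs scrub status /home
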
I was recently on btrfs-progs 5.4.1 and didn't see this behavior
myself on a raid1 volume.
--
Chris Murphy