On 7/5/20 3:36 pm, Chris Murphy wrote:
> This was fixed in 5.2.1. I'm not sure why you're seeing this.
>
> commit 96ed8e801fa2fc2d8a99e757566293c05572ebe1
> Author: Grzegorz Kowal <grzegorz@xxxxxxxxxxxx>
> Date:   Sun Jul 7 14:58:56 2019 -0300
>
>     btrfs-progs: scrub: fix ETA calculation

Maybe not fixed under all conditions! :)

> What I would do is cancel the scrub. And then delete the applicable
> file in /var/lib/btrfs, which is the file that keeps track of the
> scrub. Then do 'btrfs scrub status' on that file system and it should
> say there are no stats, but it'd be interesting to know if Total to
> Scrub is sane.

$ sudo btrfs scrub status /home
UUID:             85069ce9-be06-4c92-b8c1-8a0f685e43c6
no stats available
Total to scrub:   7.31TiB
Rate:             0.00B/s
Error summary:    no errors found

> You can also start another scrub, and then again check
> status and see if it's still sane or not. If not I'd cancel it and
> keep troubleshooting.

$ sudo btrfs scrub status -d /home
UUID:             85069ce9-be06-4c92-b8c1-8a0f685e43c6

scrub device /dev/sda (id 1) status
Scrub started:    Thu May  7 15:44:21 2020
Status:           running
Duration:         0:06:53
Time left:        9:23:26
ETA:              Fri May  8 01:14:40 2020
Total to scrub:   3.66TiB
Bytes scrubbed:   45.24GiB
Rate:             112.16MiB/s
Error summary:    no errors found

scrub device /dev/sdb (id 2) status
Scrub started:    Thu May  7 15:44:21 2020
Status:           running
Duration:         0:06:53
Time left:        9:24:50
ETA:              Fri May  8 01:16:04 2020
Total to scrub:   3.66TiB
Bytes scrubbed:   45.12GiB
Rate:             111.88MiB/s
Error summary:    no errors found

Still sane after cancelling and resuming.

One thing that might be relevant: on the original scrub, I started it on the
mountpoint, but I initially cancelled and resumed it on the device /dev/sda
rather than the mountpoint. Could that trigger a bug?

Cheers,
Andrew
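For the record, the sequence Chris suggests can be sketched as a dry run like
this (the scrub.status.<fsid> filename under /var/lib/btrfs is my assumption
about where btrfs-progs keeps its scrub state; the commands are only echoed
here, not executed):

```shell
#!/bin/sh
# Dry-run sketch of the suggested troubleshooting steps.
# Swap 'echo' for real execution (as root) when you actually want to run it.
run() { echo "+ $*"; }

MNT=/home
FSID=85069ce9-be06-4c92-b8c1-8a0f685e43c6   # UUID from 'btrfs scrub status'

run btrfs scrub cancel "$MNT"
# Assumption: the state file is named scrub.status.<fsid> in /var/lib/btrfs.
run rm "/var/lib/btrfs/scrub.status.$FSID"
run btrfs scrub status "$MNT"    # should now report "no stats available"
run btrfs scrub start "$MNT"     # restart on the mountpoint, not /dev/sda
```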
