On 08/05/2020 09:19, Andrew Pam wrote:
> On 8/5/20 5:37 pm, Chris Murphy wrote:
>> Are there any messages in dmesg?
>
> Well this is interesting:
>
> [129682.760759] BTRFS info (device sda): scrub: finished on devid 2 with
> status: 0
> [129683.173404] BTRFS info (device sda): scrub: finished on devid 1 with
> status: 0
>
> But then:
>
> $ sudo btrfs scrub status -d /home
> UUID: 85069ce9-be06-4c92-b8c1-8a0f685e43c6
> scrub device /dev/sda (id 1) status
> Scrub started:    Thu May 7 15:44:21 2020
> Status:           interrupted
> Duration:         5:40:13
> Total to scrub:   3.66TiB
> Rate:             151.16MiB/s
> Error summary:    no errors found
> scrub device /dev/sdb (id 2) status
> Scrub started:    Thu May 7 15:44:21 2020
> Status:           interrupted
> Duration:         5:40:16
> Total to scrub:   3.66TiB
> Rate:             152.92MiB/s
> Error summary:    no errors found
>
> So was it really "interrupted", or did it finish normally with no errors
> but btrfs-progs is reporting wrongly?

I also don't know whether it really finished successfully.

If you are worried that it is somehow looping (bytes scrubbed going up
but not really making progress), use:

    btrfs scrub status -dR /home

and look at last_physical for each disk - it should always be
increasing.

Also, there have been bugs in cancel/resume in the past, and there could
be more bugs lurking there, particularly for multi-device filesystems.
If you are going to cancel and resume, check last_physical for each
device before the cancel (using 'status -dR') and again after the
resume, and make sure the values look sensible: not gone backwards, not
skipped massively forward, and not started again on a device which had
already finished.
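
Roughly something like this (a sketch only - the exact field names and
layout of the raw 'status -dR' output can differ between btrfs-progs
versions, so adjust the grep pattern to match what your version prints):

    # Record the per-device last_physical values before cancelling
    sudo btrfs scrub status -dR /home | grep -E 'scrub device|last_physical' > /tmp/scrub-before.txt
    sudo btrfs scrub cancel /home

    # Later, resume and take another snapshot of the same fields
    sudo btrfs scrub resume /home
    sudo btrfs scrub status -dR /home | grep -E 'scrub device|last_physical' > /tmp/scrub-after.txt

    # Compare: each device's last_physical should only have moved forward
    diff /tmp/scrub-before.txt /tmp/scrub-after.txt

If a device's last_physical in the "after" snapshot is smaller than
before the cancel, or a device that had already finished shows scrubbing
activity again, that would suggest the resume went wrong.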
