On Tuesday, February 10, 2015 2:17:43 AM EST, Kai Krakow wrote:
Tobias Holst <tobby@xxxxxxxx> schrieb:
and "btrfs scrub status /[device]" gives me the following output:
"scrub status for [UUID]
scrub started at Mon Feb 9 18:16:38 2015 and was aborted after 2008 seconds
total bytes scrubbed: 113.04GiB with 0 errors"
Does not look very correct to me:
Why should a scrub on a six-drive btrfs array that is probably multiple
terabytes in size (as you state a restore from backup would take days) take
only ~2000 seconds? And scrub only ~120 GB worth of data? Either your six
devices are really small (then why RAID-6), or your data is very sparse (then
why does it take so long), or scrub aborts prematurely and never checks the
complete devices (I guess this is it).
And that's what it actually says: "aborted after 2008 seconds". I'd expect
"finished after XXXX seconds" if I remember my scrub runs correctly (which I
currently don't run regularly because they take a long time and IO performance
suffers while they run).
IO performance does suffer during a scrub. I use the following:

  ionice -c 3 btrfs scrub start -Bd -n 19 /<target>

The combination of -n 19 and ionice makes it workable here.
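For unattended runs, the aborted/finished distinction discussed above can be
checked with a quick grep over the status output. A minimal sketch, where the
$status variable stands in for real "btrfs scrub status /<target>" output
(since no btrfs mount can be assumed here):

```shell
# Stand-in for the output of: btrfs scrub status /<target>
status='scrub started at Mon Feb 9 18:16:38 2015 and was aborted after 2008 seconds
total bytes scrubbed: 113.04GiB with 0 errors'

# A finished scrub reports "finished after N seconds" instead of
# "was aborted after N seconds", so matching on "was aborted" is enough
# to flag an incomplete run.
if printf '%s\n' "$status" | grep -q 'was aborted'; then
    echo "scrub did not complete - rerun or investigate"
else
    echo "scrub finished cleanly"
fi
```

With the quoted status from this thread, the check prints the
"did not complete" branch; a clean run would take the other one.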
Tobias, why do you think btrfsck does not work on raid6? It runs fine
here on raid5.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html