On 2016-06-25 12:44, Chris Murphy wrote:
> On Fri, Jun 24, 2016 at 12:19 PM, Austin S. Hemmelgarn
> <ahferroin7@xxxxxxxxx> wrote:
>> Well, the obvious major advantage that comes to mind for me of
>> checksumming parity is that it would let us scrub the parity data
>> itself and verify it.
> OK but hold on. During scrub, it should read data, compute checksums
> *and* parity, and compare those to what's on-disk - EXTENT_CSUM in
> the checksum tree, and the parity strip in the chunk tree. And if
> parity is wrong, then it should be replaced.
Except that's horribly inefficient. With limited exceptions involving
highly situational co-processors, computing a checksum of a parity block
is always going to be faster than computing parity for the stripe. By
using the checksum to validate the parity, we can safely speed up the
common case of near-zero errors during a scrub by a pretty significant
factor.
The ideal situation that I'd like to see for scrub WRT parity is (a
rough code sketch follows the list):
1. Store checksums for the parity itself.
2. During scrub, if the checksum is good, the parity is good, and we
just saved the time of computing the whole parity block.
3. If the checksum is not good, compute the parity. If the freshly
computed parity matches what is already on disk, then the checksum
itself was bad and should be rewritten (and we should probably
recompute the whole block of checksums it's in); otherwise the parity
was bad, so write out the new parity and update the checksum.
4. Have an option to skip the csum check on the parity and always
compute it.
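
To make that concrete, here's a minimal standalone sketch of the
decision in steps 2 and 3, assuming in-memory buffers in place of disk
I/O and an FNV-1a hash standing in for the crc32c btrfs actually uses;
every name in it is hypothetical, this is not the real btrfs scrub
code:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum scrub_action {
	PARITY_OK,       /* step 2: csum matched, nothing to do */
	REWRITE_CSUM,    /* step 3a: parity fine, stored csum was bad */
	REWRITE_PARITY,  /* step 3b: parity itself was bad */
};

/* Stand-in checksum (FNV-1a); btrfs would use crc32c here. */
static uint64_t csum(const uint8_t *buf, size_t len)
{
	uint64_t h = 0xcbf29ce484222325ULL;
	for (size_t i = 0; i < len; i++) {
		h ^= buf[i];
		h *= 0x100000001b3ULL;
	}
	return h;
}

/* Recompute RAID5 parity by XORing all the data strips together. */
static void compute_parity(uint8_t *out, const uint8_t *const *strips,
			   int nstrips, size_t len)
{
	memset(out, 0, len);
	for (int s = 0; s < nstrips; s++)
		for (size_t i = 0; i < len; i++)
			out[i] ^= strips[s][i];
}

enum scrub_action scrub_parity_strip(const uint8_t *const *data_strips,
				     int nstrips,
				     const uint8_t *ondisk_parity,
				     size_t len, uint64_t stored_csum,
				     uint8_t *new_parity)
{
	/* Step 2: the cheap path - one checksum over one strip. */
	if (csum(ondisk_parity, len) == stored_csum)
		return PARITY_OK;

	/* Step 3: only on a csum mismatch do we pay for the full
	 * parity recompute, then decide which side was wrong. */
	compute_parity(new_parity, data_strips, nstrips, len);
	if (memcmp(new_parity, ondisk_parity, len) == 0)
		return REWRITE_CSUM;

	return REWRITE_PARITY;  /* caller writes new parity + csum */
}

The point is simply that the expensive compute_parity() only runs once
the cheap check has already failed.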
> Even "echo check > md/sync_action" does this. So no pun intended but
> Btrfs isn't even at parity with mdadm on data integrity if it doesn't
> check whether the parity matches the data.
Except that MD and LVM don't have checksums to verify anything outside
of the very high-level metadata. They have to compute the parity during
a scrub because that's the _only_ way they have to check data integrity.
Just because that's the only way for them to check it does not mean we
have to follow their design, especially considering that we have other,
faster ways to check it.
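
For contrast, a checksum-less scrub in the MD style has no cheap path
at all; per stripe, it can only do something like the following (again
a made-up, standalone sketch, not how MD is actually structured):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* MD-style consistency check: with no stored checksums, every stripe
 * pays the full XOR recompute whether or not anything is wrong.
 * Returns 1 if the on-disk parity is consistent with the data. */
int md_style_check(const uint8_t *const *data_strips, int nstrips,
		   const uint8_t *ondisk_parity, size_t len,
		   uint8_t *scratch)
{
	memset(scratch, 0, len);
	for (int s = 0; s < nstrips; s++)
		for (size_t i = 0; i < len; i++)
			scratch[i] ^= data_strips[s][i];
	return memcmp(scratch, ondisk_parity, len) == 0;
}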
>> I'd personally much rather know my parity is bad before I need to use
>> it than after using it to reconstruct data and getting an error there,
>> and I'd be willing to bet that most seasoned sysadmins working for
>> companies using big storage arrays feel the same about it.
> That doesn't require parity csums though. It just requires computing
> parity during a scrub and comparing it to the parity on disk to make
> sure they're the same. If they aren't, assuming no other error for
> that full stripe read, then the parity block is replaced.
It does not require it, but it can make it significantly more efficient,
and even a 1% increase in efficiency is a huge difference on a big array.
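
To put rough numbers on that: for a hypothetical 6-device RAID5 with
64 KiB strips, verifying a stored checksum means hashing the 64 KiB
parity strip, while recomputing parity means XORing through the other
five strips, 320 KiB of arithmetic per stripe. The data strips get
read in either scheme, since scrub has to verify their own csums
anyway, so in the common no-error case nearly all of the XOR work
disappears.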
> So that's also something to check in the code or poke a system with a
> stick and see what happens.
>> I could see it being practical to have an option to turn this off for
>> performance reasons or similar, but again, I have a feeling that most
>> people would rather be able to check if a rebuild will eat data before
>> trying to rebuild (depending on the situation in such a case, it will
>> sometimes just make more sense to nuke the array and restore from a
>> backup instead of spending time waiting for it to rebuild).
> The much bigger problem we have right now, which affects both Btrfs
> and LVM/mdadm md raid, is this silly bad default of non-enterprise
> drives having no configurable SCT ERC, with ensuing long recovery
> times, combined with the kernel SCSI command timer at 30 seconds -
> which also fucks over regular single disk users, because it means
> they don't get the "benefit" of long recovery times, which is the
> whole g'd point of that feature. This causes so many problems where
> bad sectors just get worse and never get fixed up, because of all the
> link resets. So I still think it's a bullshit default on the kernel
> side, because it affects the majority use case; it's only a
> non-problem with proprietary hardware raid, or with software raid
> using enterprise (or NAS-specific) drives that already have short
> recovery times by default.
On this, we can agree.
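
For anyone bitten by this in the meantime: on drives that support SCT
ERC it can be shortened with smartctl (typically "smartctl -l
scterc,70,70 /dev/sdX"), and failing that, the kernel's per-device
command timer can be raised so it outlasts the drive's internal
recovery. A minimal sketch of the latter; the device name and the
180-second value are just examples:

#include <stdio.h>

int main(void)
{
	/* /sys/block/<dev>/device/timeout is the kernel's per-device
	 * SCSI command timer, in seconds. */
	FILE *f = fopen("/sys/block/sda/device/timeout", "w");
	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "180\n");
	return fclose(f) == 0 ? 0 : 1;
}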