Re: [PATCH 2/4] Btrfs: fix data corruption in raid6

On Tue, Nov 07, 2017 at 04:32:55PM +0800, Anand Jain wrote:
> 
> 
> On 11/02/2017 08:54 AM, Liu Bo wrote:
> >With the raid6 profile, btrfs can end up with data corruption via the
> >following steps.
> >
> >Say we have 5 disks set up with the raid6 profile,
> >
> >1) mount this btrfs
> >2) one disk gets pulled out
> >3) write something to btrfs and sync
> >4) another disk gets pulled out
> >5) write something to btrfs and sync
> >6) umount this btrfs
> >7) bring the two disks back
> >8) reboot
> >9) mount this btrfs
> >
> >Chances are the mount will fail (even with -o degraded) because of
> >failures reading metadata blocks; IOW, our raid6 setup is not able to
> >tolerate two disk failures in some cases.
> >
> >So it turns out that there is a bug in raid6's recovery code, that is,
> >
> >if we have one of the stripes in the raid6 layout as shown here,
> >
> >| D1(*)  | D2(*)  | D3 | D4 | D5    |
> >-------------------------------------
> >| data1  | data2  | P  | Q  | data0 |
> 
> 
> >D1 and D2 are the two disks which got pulled out and brought back.
> >When mount reads data1 and finds that data1 doesn't match its crc,
> >btrfs goes to recover data1 from the other stripes; what it'll do is
> >
> >1) read data2, parity P, parity Q and data0
> >
> >2) as we have a valid parity P and two data stripes, it goes to
> >    recover in raid5 style.
> 
> 
> >(However, since disk D2 was pulled out and data2 on D2 could be stale,
> 
>  data2 should end up with a crc error; if not, then raid5 recovery
>  works as expected, OR this example is confusing for explaining the
>  context of two data stripes missing.

The assumption you have is not true: when doing reconstruction, we
don't verify the checksum of each data stripe that is read from disk.

Thanks,

-liubo
> 
> Thanks, Anand
> 
> 
> >we still get the wrong data1 from this reconstruction.)
> >
> >3) btrfs continues trying to reconstruct data1 from parity Q, data2
> >    and data0, and we still get the wrong result for the same reason
> >    (see the sketch below).
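
To make that concrete, here is a minimal sketch of the raid5-style
(single-failure) XOR rebuild used in steps 2) and 3) above; the buffer
names are illustrative and this is not the actual raid56.c code:

/*
 * Rebuild one missing chunk as the XOR of parity and the surviving
 * data chunks.  Note that nothing on this path checks whether a
 * survivor's contents are actually valid.
 */
#include <stddef.h>
#include <stdint.h>

static void raid5_rebuild(uint8_t *out, const uint8_t *parity,
			  const uint8_t * const *survivors, int nr,
			  size_t len)
{
	size_t i;
	int j;

	for (i = 0; i < len; i++) {
		uint8_t v = parity[i];

		for (j = 0; j < nr; j++)
			v ^= survivors[j][i];
		/* if any survivor (e.g. a stale data2) is bad, so is out */
		out[i] = v;
	}
}

Since data2 comes from the not-yet-synced D2 and its checksum is never
verified on this path, the rebuilt data1 is wrong even though parity P
itself is intact.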
> >
> >The fix here is to take advantage of the device flag 'In_sync': all
> >data on a device might be stale if 'In_sync' has not been set.
> >
> >With this, we can rebuild the correct data1 and data2 from parity P,
> >Q and data0 (see the two-erasure sketch after the patch below).
> >
> >Signed-off-by: Liu Bo <bo.li.liu@xxxxxxxxxx>
> >---
> >  fs/btrfs/raid56.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> >diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> >index 67262f8..3c0ce61 100644
> >--- a/fs/btrfs/raid56.c
> >+++ b/fs/btrfs/raid56.c
> >@@ -1076,7 +1076,8 @@ static int rbio_add_io_page(struct btrfs_raid_bio *rbio,
> >  	disk_start = stripe->physical + (page_index << PAGE_SHIFT);
> > 
> >  	/* if the device is missing, just fail this stripe */
> >-	if (!stripe->dev->bdev)
> >+	if (!stripe->dev->bdev ||
> >+	    !test_bit(In_sync, &stripe->dev->flags))
> >  		return fail_rbio_index(rbio, stripe_nr);
> > 
> >  	/* see if we can add this page onto our existing bio */
> >
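
For reference, once D2 is treated as failed (In_sync clear), both
data1 and data2 go down the two-erasure path, which solves for both
chunks from P, Q and the remaining good data.  Below is a
self-contained sketch of that GF(2^8) algebra; it is not the kernel's
lib/raid6 implementation (which is table-driven and SIMD-accelerated,
see lib/raid6/recov.c), and all names here are illustrative:

#include <stddef.h>
#include <stdint.h>

/* GF(2^8) log/exp tables for the RAID6 polynomial x^8+x^4+x^3+x^2+1. */
static uint8_t gf_log[256], gf_exp[512];

static void gf_init(void)
{
	uint8_t x = 1;
	int i;

	for (i = 0; i < 255; i++) {
		gf_exp[i] = x;
		gf_log[x] = i;
		x = (x << 1) ^ ((x & 0x80) ? 0x1d : 0);
	}
	/* mirror the table so gf_mul can skip a modulo */
	for (i = 255; i < 512; i++)
		gf_exp[i] = gf_exp[i - 255];
}

static uint8_t gf_mul(uint8_t a, uint8_t b)
{
	return (a && b) ? gf_exp[gf_log[a] + gf_log[b]] : 0;
}

static uint8_t gf_inv(uint8_t a)
{
	return gf_exp[255 - gf_log[a]];
}

/*
 * Recover the data chunks at stripe indices x and y (x != y), given
 * P, Q and the surviving data chunks.  data[x] and data[y] are
 * rebuilt in place; their prior (stale) contents are ignored.
 *
 * P = D0 ^ D1 ^ ...        Q = g^0*D0 ^ g^1*D1 ^ ...  (g = 2)
 */
static void raid6_recov_2data(uint8_t *data[], int ndata, int x, int y,
			      const uint8_t *p, const uint8_t *q,
			      size_t len)
{
	uint8_t gx = gf_exp[x % 255], gy = gf_exp[y % 255];
	uint8_t coef = gf_inv(gx ^ gy);
	size_t i;
	int d;

	for (i = 0; i < len; i++) {
		uint8_t pd = p[i], qd = q[i];

		/* fold the surviving data out of P and Q */
		for (d = 0; d < ndata; d++) {
			if (d == x || d == y)
				continue;
			pd ^= data[d][i];
			qd ^= gf_mul(gf_exp[d % 255], data[d][i]);
		}
		/* pd = Dx ^ Dy,  qd = g^x*Dx ^ g^y*Dy; solve for Dx */
		data[x][i] = gf_mul(qd ^ gf_mul(gy, pd), coef);
		data[y][i] = pd ^ data[x][i];
	}
}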