Re: [PATCH] btrfs: raid56: data corruption on a device removal

On Wed, Dec 12, 2018 at 12:25:55AM +0000, Dmitriy Gorokh wrote:
> I found that a RAID5 or RAID6 filesystem might get corrupted in the following scenario:
> 
> 1. Create a 4-disk RAID6 filesystem
> 2. Preallocate 16 10 GiB files
> 3. Run fio: 'fio --name=testload --directory=./ --size=10G --numjobs=16 --bs=64k --iodepth=64 --rw=randrw --verify=sha256 --time_based --runtime=3600'
> 4. After a few minutes, pull out two drives: 'echo 1 > /sys/block/sdc/device/delete ;  echo 1 > /sys/block/sdd/device/delete'
> 
> In about 5 out of 10 runs, the test led to silent data corruption
> of a random extent, resulting in 'IO Error' and 'csum failed' messages
> when trying to read the affected file. It usually affects only a small
> portion of the files and only one underlying extent of a file. When I
> converted the logical address of the damaged extent to a physical
> address and dumped the stripe directly from the drives, I saw a
> specific pattern that was always the same when the issue occurred.
> 
> I found that a few bios which were being processed right during the
> drive removal contained a non-zero bio->bi_iter.bi_done field despite
> an EIO bi_status. The bi_sector field was also advanced from the
> original value by that bi_done amount. This looks like a quite rare
> condition. Subsequently, in the raid_rmw_end_io handler, such a
> failed bio can be translated to the wrong stripe number, failing the
> wrong rbio.

Thanks for the analysis; it sounds correct to me, and so does the fix.
It would be good if you could attach the logs or the portions of the
dumps you used to understand the problem, like the pattern you mention
above.
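
To illustrate the failure mode described in the quoted analysis, here is a
minimal userspace sketch. It is not the btrfs raid56 code; the names
stripe_map and find_stripe_index are made up for illustration. It only shows
how looking up a stripe from an already-advanced bi_iter.bi_sector can
attribute an EIO to the wrong stripe:

/*
 * Minimal sketch (not kernel code) of the failure mode: if an end_io
 * handler maps a failed bio back to its stripe using the *current*
 * bi_iter.bi_sector, and the iterator was partially advanced
 * (bi_done != 0) before the bio failed with EIO, the lookup can land
 * on the wrong stripe.
 */
#include <stdio.h>
#include <stdint.h>

#define NR_STRIPES      4
#define STRIPE_SECTORS  128             /* 64 KiB stripes, 512-byte sectors */

struct stripe_map {
	uint64_t start[NR_STRIPES];     /* first sector of each stripe */
};

/* Map a sector back to the stripe it belongs to, or -1 if not found. */
static int find_stripe_index(const struct stripe_map *map, uint64_t sector)
{
	for (int i = 0; i < NR_STRIPES; i++) {
		if (sector >= map->start[i] &&
		    sector < map->start[i] + STRIPE_SECTORS)
			return i;
	}
	return -1;
}

int main(void)
{
	struct stripe_map map = { .start = { 0, 128, 256, 384 } };
	uint64_t submitted_sector = 120;   /* bio originally aimed at stripe 0 */
	uint64_t advanced = 16;            /* sectors already completed (bi_done) */

	/* Correct: fail the stripe the bio was submitted against. */
	printf("submitted sector %llu -> stripe %d\n",
	       (unsigned long long)submitted_sector,
	       find_stripe_index(&map, submitted_sector));

	/*
	 * Buggy: the advanced iterator crosses a stripe boundary, so the
	 * EIO is attributed to the next stripe and the wrong one is failed.
	 */
	printf("advanced sector  %llu -> stripe %d\n",
	       (unsigned long long)(submitted_sector + advanced),
	       find_stripe_index(&map, submitted_sector + advanced));
	return 0;
}

Running this prints stripe 0 for the sector the bio was submitted at but
stripe 1 for the advanced sector, which mirrors how the wrong rbio ends up
being failed when the iterator has been partially advanced before the error
completion.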


