Re: Exactly what is wrong with RAID5/6

On 21.06.2017 09:51, Marat Khalili wrote:
> On 21/06/17 06:48, Chris Murphy wrote:
>> Another possibility is to ensure a new write goes to a new, *not*
>> full, stripe, i.e. dynamic stripe size. So if the modification is a
>> 50K file on a 4-disk raid5, instead of writing 3 64K data strips +
>> 1 64K parity strip (a full-stripe write), write out 1 64K data strip
>> + 1 64K parity strip. In effect, a 4-disk raid5 would quickly get not
>> just 3 data + 1 parity Btrfs block groups, but also 1 data + 1 parity
>> and 2 data + 1 parity chunks, and direct writes to the proper chunk
>> based on size. Anyway, that's beyond my ability to assess in terms of
>> how much allocator work it would take. Balance I'd expect to rewrite
>> everything to the maximum number of data strips possible; the
>> optimization would only apply to normal COW operation.
> This will make some filesystems mostly RAID1, negating all space savings
> of RAID5, won't it?
> 
> Isn't it easier to recalculate the parity block using the previous
> state of the two rewritten strips, parity and data? I don't understand
> all the performance implications, but it might scale better with the
> number of devices.
> 
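
To make Chris's variable-geometry idea concrete: the allocator would
direct each write to the narrowest block group whose full stripe still
covers it. A minimal sketch, assuming his 4-disk example with 64K
strips (the STRIP_SIZE constant and data_strips_for_write() are
hypothetical illustrations, not btrfs code):

    #include <stddef.h>

    /* Pick the number of data strips so that every write is a
     * full-stripe write; 4-disk raid5 with 64K strips, as in the
     * example above. Larger writes split into multiple full stripes. */
    #define STRIP_SIZE (64 * 1024)

    static int data_strips_for_write(size_t bytes)
    {
        if (bytes <= 1 * STRIP_SIZE)
            return 1;   /* goes to a 1 data + 1 parity chunk */
        if (bytes <= 2 * STRIP_SIZE)
            return 2;   /* goes to a 2 data + 1 parity chunk */
        return 3;       /* 3 data + 1 parity, the maximum here */
    }

A 50K write would then land in the 1+1 chunk as a full-stripe write
instead of triggering a read-modify-write in a 3+1 stripe.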
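Marat's counter-proposal is the classic read-modify-write parity
update, which follows from the XOR identity
new_parity = old_parity ^ old_data ^ new_data. A sketch (the
rmw_parity() helper is hypothetical, not the actual kernel code):

    #include <stddef.h>
    #include <stdint.h>

    /* Update parity in place after one data strip changes: only the
     * modified data strip and the parity strip are read and rewritten,
     * regardless of how many devices the array has. */
    static void rmw_parity(uint8_t *parity, const uint8_t *old_data,
                           const uint8_t *new_data, size_t strip_len)
    {
        for (size_t i = 0; i < strip_len; i++)
            parity[i] ^= old_data[i] ^ new_data[i];
    }
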

That (a read-modify-write of the data and parity strips) is effectively
what it does today; the problem is that the RAID[56] layer sits below
the btrfs allocator, so the same stripe may be shared by different
transactions. This defeats the very idea of redirect-on-write, where
data on disk is assumed never to be changed by subsequent modifications.
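
The failure mode is easy to demonstrate with one byte per strip: if a
stripe mixes data from two transactions and the machine dies between
the data write and the parity write, a later device loss silently
corrupts the *other* transaction's committed data. A self-contained toy
model (not btrfs code):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 3-disk raid5 stripe: d1 committed by transaction A,
         * d2 owned by transaction B, p = d1 ^ d2. */
        uint8_t d1 = 0xAA, d2 = 0x11;
        uint8_t p = d1 ^ d2;

        d2 = 0x22;      /* txn B rewrites its strip in place ... */
                        /* ... and we crash before p is updated. */

        /* The disk holding d1 then fails; rebuild from d2 and the
         * now-stale parity: */
        uint8_t rebuilt = p ^ d2;
        printf("d1 was 0x%02X, rebuilt as 0x%02X\n", d1, rebuilt);
        assert(rebuilt != d1);  /* txn A's committed data is gone */
        return 0;
    }

With true redirect-on-write, transaction B's new data would go to a
fresh stripe, and the old stripe's parity would stay valid for d1.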