On Thu, Feb 13, 2014 at 11:13:58AM -0500, Jim Salter wrote:
> This might be a stupid question but...
>
> Are there any plans to make parity RAID levels in btrfs similar to
> the current implementation of btrfs-raid1?

   Yes.

> It took me a while to realize how different and powerful btrfs-raid1
> is from traditional raid1. The ability to string together virtually
> any combination of "mutt" hard drives in arbitrary ways and yet
> maintain redundancy is POWERFUL, and is seriously going to be a
> killer feature advancing btrfs adoption in small environments.
>
> The one real drawback to btrfs-raid1 is that you're committed to n/2
> storage efficiency, since you're using pure redundancy rather than
> parity on the array. I was thinking about that this morning, and
> suddenly it occurred to me that you ought to be able to create a
> striped parity array in much the same way as a btrfs-raid1 array.
>
> Let's say you have five disks, and you arbitrarily want to define a
> stripe length of four data blocks plus one parity block per
> "stripe". Right now, what you're looking at effectively amounts to a
> RAID3 array, like FreeBSD used to use. But what if we add two more
> disks? Or three more disks? Or ten more? Is there any reason we
> can't keep our stripe length of four data blocks + one parity block,
> and just distribute the stripes relatively ad-hoc, in the same way
> btrfs-raid1 distributes redundant data blocks across an ad-hoc array
> of disks?

   None whatsoever.

> This could be a pretty powerful setup IMO - if you implemented
> something like this, you'd be able to arbitrarily define your
> storage efficiency (the ratio of parity blocks to data blocks) and
> your fault-tolerance level (how many drives you can afford to lose
> before failure) WITHOUT tying either directly to your underlying
> disks, or necessarily needing to rebalance as you add more disks to
> the array. This would be a heck of a lot more flexible than ZFS'
> approach of adding more immutable vdevs.
>
> Please feel free to tell me why I'm dumb for either 1. not realizing
> the obvious flaw in this idea or 2. not realizing it's already being
> worked on in exactly this fashion. =)

   The latter. :)

   One of the (many) existing problems with the parity RAID
implementation as it stands is that with large numbers of devices, it
becomes quite inefficient to write data when you (may) need to modify
dozens of devices. It's been Chris's stated intention for a while now
to allow a bound to be placed on the maximum number of devices per
stripe, which allows a degree of control over the space-yield <->
performance knob.

   It's one of the reasons that the usage tool [1] has a "maximum
stripes" knob on it -- so that you can see the behaviour of the FS
once that feature's in place.

   Hugo.

[1] http://carfax.org.uk/btrfs-usage/

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- Nothing right in my left brain. Nothing left in ---
                     my right brain.
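
To make the space-yield <-> performance trade-off above a little more
concrete, here is a minimal Python sketch in the spirit of the
btrfs-usage calculator at [1]. It is not btrfs code: the greedy
"allocate a chunk from the devices with the most free space" model and
the usable_space, parity and max_stripe_width names are assumptions
made purely for illustration.

    # Rough model of usable space under a capped parity-stripe width.
    # Illustrative sketch only, not btrfs code: the greedy allocation
    # model and all names/parameters here are assumptions.

    def usable_space(device_sizes_gib, parity=1, max_stripe_width=None,
                     chunk_gib=1):
        """Estimate usable data capacity (GiB) for mixed-size devices.

        Each allocation takes one chunk from each of the N devices with
        the most free space, where N is capped at max_stripe_width
        (assumed to be at least parity + 1); the parity chunks in each
        stripe are overhead, the rest is data.
        """
        free = list(device_sizes_gib)
        data = 0
        while True:
            # Devices that can still contribute a whole chunk.
            candidates = [f for f in free if f >= chunk_gib]
            if len(candidates) < parity + 1:
                break  # too few devices left to form a redundant stripe
            width = len(candidates)
            if max_stripe_width is not None:
                width = min(width, max_stripe_width)
            # Take one chunk from each of the 'width' emptiest devices.
            free.sort(reverse=True)
            for i in range(width):
                free[i] -= chunk_gib
            data += (width - parity) * chunk_gib
        return data

    # Five "mutt" drives, single parity, stripes capped at five devices
    # (i.e. at most four data chunks + one parity chunk per stripe).
    sizes_gib = [2000, 1500, 1000, 1000, 500]
    print(usable_space(sizes_gib, parity=1, max_stripe_width=5))

With those example sizes (2000, 1500, 1000, 1000 and 500 GiB), this
model yields 4000 GiB of data space from 6000 GiB raw: stripes are five
devices wide while every drive still has room, then narrow as the
smaller drives fill, ending in a 1 data + 1 parity layout across the
two largest drives.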
