On Sat, Aug 29, 2015 at 2:52 AM, George Duffield <forumscollective@xxxxxxxxx> wrote:
> Funny you should say that, whilst I'd read about it, it didn't concern
> me much until Neil Brown himself advised me against expanding the
> raid5 arrays any further (one was built using 3TB drives and the other
> using 4TB drives). My understanding is that larger arrays are
> typically built using more drives of lower capacity. I'm also loath
> to use mdadm as expanding arrays takes forever, whereas a Btrfs array
> should expand much quicker. If Btrfs raid isn't yet ready for prime
> time I'll just hold off doing anything for the moment, frustrating as
> that is.

I think a grid comparing mdadm and Btrfs features and behaviors would be useful.

The main thing to be aware of with Btrfs multiple-device setups is that failure handling is essentially not there yet, whereas it is with mdadm and LVM raids. Btrfs keeps using a drive through read and write failures, where md will "eject" a drive from the array after even a single write failure, and after some number of read failures (I'm not sure what the threshold is). There's also no spares support, and no notification of problems, just kernel messages.

Rather than recreating notification emails the mdadm way, I think it's better to look at the libblockdev and storaged projects, since both are taking on standardizing the manipulation of mdadm arrays, LVM, LUKS, and other Linux storage technologies. Projects like (but not limited to) openLMI and a future udisks2 replacement can then get information and state from them, and propagate that up to the user (email, text message, web browser, whatever).
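In the meantime, here's a rough sketch of the gap as it stands today (the mount point and address are placeholders, and it assumes a working mail(1)/MTA on the box):

  # mdadm can already email you on Fail/Degraded events out of the box
  # (a MAILADDR line in /etc/mdadm.conf does the same thing):
  mdadm --monitor --scan --daemonise --mail=admin@example.org

  # Btrfs has no monitor daemon; the nearest equivalent I know of is
  # polling the per-device error counters, e.g. from cron, and mailing
  # yourself when any counter is non-zero:
  errs=$(btrfs device stats /mnt/pool | grep -vE ' 0$') && \
      echo "$errs" | mail -s "btrfs errors on /mnt/pool" admin@example.org

Crude, but it beats hoping somebody notices the kernel messages.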
--
Chris Murphy