On Sep 30, 2013, at 10:43 PM, Duncan <1i5t5.duncan@xxxxxxx> wrote:

> Meanwhile, I really do have to question the use case where the risks of a
> single dead device killing a raid0 (or for that matter, running still
> experimental btrfs) are fine, but spending days doing data maintenance on
> data not valuable enough to put on anything but experimental btrfs raid0
> is warranted over simply blowing the data away and starting with brand
> new mkfs-ed filesystems.

Yes, of course. It must be a test case, and I think that for non-experimental, stable Btrfs it's reasonable to expect device delete to work reliably regardless of the raid level, simply because it's offered. And after all, maybe the use case involves enterprise SSDs, each of which should have a less than 1% chance of failing during its service life. (Naturally, the delete is also going to go a lot faster than days on such hardware.)

> That's a strong hint to me that either the raid0
> use case is wrong, or the days of data move and reshape instead of
> blowing it away and recreating brand new filesystems is wrong, and that
> one or the other should be reevaluated.

I think it's the wrong use case today, except for testing it. It's legitimate to try to blow things up, simply because the functionality is offered, so long as the idea is "I really would like this workflow to actually work in 2-5 years". Otherwise it is sort of a rat hole.

The other thing: clearly the OP is surprised it's taking anywhere near this long. Had he known in advance, he probably would have made a different choice.

Chris Murphy
