On Thursday 17 November 2016 21:20:56 CET, Austin S. Hemmelgarn wrote:
> On 2016-11-17 15:05, Chris Murphy wrote:
>> I think the wiki should be updated to reflect that raid1 and raid10
>> are mostly OK. I think it's grossly misleading to consider either as
>> green/OK when a single degraded read-write mount creates single
>> chunks that will then prevent a subsequent degraded read-write
>> mount. The lack of notifications about device faults also makes them
>> less than OK. They're not in the "do not use" category, but they
>> should be in a middle-ground status so users can make informed
>> decisions.
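To make that failure mode concrete: after a degraded read-write mount of a two-device raid1, new allocations can land in "single" chunks, and they show up in `btrfs filesystem df` output. Here's a sketch of spotting them; the sample output below is fabricated for illustration (only the "Profile, type: totals" line format follows the real tool), and the balance command is left commented because it needs a real mounted filesystem.

```shell
# Fabricated example of `btrfs filesystem df /mnt` output after a
# degraded rw mount of a two-device raid1: some data went into
# unreplicated "single" chunks while only one device was present.
fi_df_output='Data, RAID1: total=2.00GiB, used=1.10GiB
Data, single: total=1.00GiB, used=256.00MiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=320.00MiB'

# Flag any block groups that are not replicated.
if printf '%s\n' "$fi_df_output" | grep -q ', single:'; then
    echo "single chunks present: convert them back before the next degraded mount"
    # On the real filesystem, converting them back to raid1 would be
    # something like (rewrites the affected chunks, data is preserved):
    #   btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
fi
```

The point being that until such a convert is run, the filesystem carries chunks with no second copy, which is exactly what trips up the next degraded mount on older kernels.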
> It's worth pointing out also regarding this:
> * This is handled sanely in recent kernels (the check got
> changed from per-fs to per-chunk, so you still have a usable FS
> if all the single chunks are only on devices you still have).
> * This is only an issue with filesystems with exactly two
> disks. If a 3+ disk raid1 FS goes degraded, you still generate
> raid1 chunks.
> * There are a couple of other cases where raid1 mode falls flat
> on its face (lots of I/O errors in a short span of time with
> compression enabled can cause a kernel panic, for example).
> * raid10 has some other issues of its own: if you lose two
> devices, your filesystem is dead, which shouldn't be the case
> 100% of the time (if you lose different parts of each mirror,
> BTRFS _should_ be able to recover, it just doesn't do so right
> now).
> As far as the failed device handling issues go, those are a
> problem with BTRFS in general, not just raid1 and raid10, so I
> wouldn't count those against raid1 and raid10.
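For what it's worth, btrfs does keep per-device error counters even though nothing actively notifies you; `btrfs device stats` exposes them, so a periodic check can stand in for the missing fault notifications. A sketch, with fabricated counter values (only the counter names and the `[device].counter value` line format follow the real tool):

```shell
# Fabricated sample output of `btrfs device stats /mnt`; the counter
# names are real, the values are made up for illustration.
stats='[/dev/sda].write_io_errs    0
[/dev/sda].read_io_errs     12
[/dev/sda].flush_io_errs    0
[/dev/sda].corruption_errs  3
[/dev/sda].generation_errs  0'

# Print every counter that is nonzero; no output means no recorded
# errors, so this is easy to wire into a cron job or monitoring hook.
printf '%s\n' "$stats" | awk '$2 > 0 { print "nonzero:", $0 }'
```

I believe newer btrfs-progs also grew a `--check` option for `device stats` that returns a nonzero exit status when any counter is nonzero, which is handier for scripting, but the awk filter works with older versions too.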
Everything you mentioned should be in the wiki IMHO. Knowledge is power.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html