On Sat, Feb 14, 2015 at 11:28 PM, Duncan <1i5t5.duncan@xxxxxxx> wrote:
> Chris Murphy posted on Sat, 14 Feb 2015 04:52:12 -0700 as excerpted:
>> Also, there's a nasty little gotcha: there is no equivalent of the
>> mdadm write-intent bitmap. So once one member drive is mounted
>> degraded+rw, it's changed, and there's no way to "catch up" the other
>> drive - if you reconnect it, things might seem OK, but there's a good
>> chance of corruption in such a case. You have to make sure you wipe
>> the "lost" drive (the one with the older generation). wipefs -a
>> should be sufficient; then use 'device add' and 'device delete
>> missing' to rebuild it.
>
> I caught this in my initial btrfs experimentation, before I set it up
> permanently. It's worth repeating for emphasis, with a bit more
> information as well.
>
> *** If you break up a btrfs raid1 and attempt to recombine afterward,
> be *SURE* you *ONLY* mount the one side writable after that. As long
> as ONLY one side is written to, that one side will consistently have
> a later generation than the device that was dropped out, and you can
> add the dropped device back in,

Right. I left out the distinguishing factor in whether or not it
corrupts. I'm uncertain how bad this corruption is; I've never tried
reproducing it.

-- 
Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
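For reference, the wipe-and-rebuild procedure described in the quoted text might look like the sketch below. The device names /dev/sda (the good drive that was mounted degraded+rw), /dev/sdb (the stale drive that dropped out), and the mount point /mnt are all placeholders for illustration, not values from the thread:

```shell
# Assumed layout (hypothetical): /dev/sda stayed mounted degraded+rw,
# /dev/sdb is the stale drive with the older generation. DESTRUCTIVE:
# these commands erase /dev/sdb.

# Wipe filesystem signatures on the stale drive so btrfs can no longer
# recognize it as a raid1 member with an old generation.
wipefs -a /dev/sdb

# Mount the surviving drive degraded and writable.
mount -o degraded /dev/sda /mnt

# Add the wiped drive back as a fresh device.
btrfs device add /dev/sdb /mnt

# Remove the phantom "missing" device; this triggers the rebuild of
# the raid1 data onto the newly added drive.
btrfs device delete missing /mnt
```

After the delete completes, a `btrfs filesystem show /mnt` should list both devices with no "missing" entry; a balance with `-dconvert=raid1 -mconvert=raid1` may be needed if any chunks were written with a single profile while degraded.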
