Re: experiment: suboptimal behaviour with write errors and multi-device filesystems

On 2020-05-02 2:23 p.m., Marc Lehmann wrote:

> 
> That's interesting - last time I used pvmove on a source with read
> errors, it didn't move that data (that was a while ago; most of my
> volumes nowadays are raid5'ed and don't suffer from read errors).
> 
> More importantly, however, if your source drive fails, pvmove will *not*
> skip the rest of the transfer and finish successfully (as btrfs did in
> the case we are discussing, with massive data loss as the result),
> simply because it cannot commit the new state.
> 
> No matter what other tool you look at, none behaves as btrfs currently
> does. Actual behaviour differs widely in detail, of course, but I can't
> come up with another situation where a removed disk results in the upper
> layers continuing to use it as if it were still there.
> 

I agree with the core of what you said, but I also think you're
overcomplicating it a bit.  If BTRFS is unable to write even a single
copy of data, it should go R/O. (God knows, it has enough triggers to go
R/O on its own already; it seems odd that being unable to write data is
not among them.)
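For what it's worth, something close to that policy can be approximated
from userspace today. This is only a rough sketch: the mount point and
the polling interval are placeholders, and it relies on the per-device
error counters that "btrfs device stats" already exposes:

    #!/bin/sh
    # Watch btrfs write-error counters and force the filesystem
    # read-only as soon as any device reports a failed write.
    # MNT and the 10-second interval are placeholders.
    MNT=/mnt
    while sleep 10; do
        # "btrfs device stats" prints lines like:
        #   [/dev/sda].write_io_errs   3
        errs=$(btrfs device stats "$MNT" \
               | awk '/write_io_errs/ { sum += $2 } END { print sum+0 }')
        if [ "$errs" -gt 0 ]; then
            echo "write errors on $MNT, remounting read-only" >&2
            mount -o remount,ro "$MNT"
            break
        fi
    done

Obviously this is racy (writes can keep failing between polls), which is
exactly why the check belongs in the filesystem itself.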

A stricter raid error mode could go R/O if *any* of the writes fail
(rather than btrfs continuing in degraded mode indefinitely until
reboot), but that is something that could be a mount option.
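To illustrate what I mean (the option name below is purely hypothetical;
the only related knob btrfs actually has today is -o degraded):

    # today: permit mounting and continuing with a missing device
    mount -o degraded /dev/sdb /mnt

    # hypothetical strict variant: any failed write flips the
    # filesystem read-only instead of silently continuing degraded
    mount -o strict_write_errors /dev/sdb /mnt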



