Re: RAID1: system stability


 



On Tue, 23 Jun 2015 02:52:43 AM Chris Murphy wrote:
> OK I actually don't know what the intended block layer behavior is
> when unplugging a device, if it is supposed to vanish, or change state
> somehow so that things that depend on it can know it's "missing" or
> what. So the question here is, is this working as intended? If the
> layer Btrfs depends on isn't working as intended, then Btrfs is
> probably going to do wild and crazy things. And I don't know that the
> part of the block layer Btrfs depends on for this is the same (or
> different) as what the md driver depends on.

I disagree with that statement.  BTRFS should be expected not to do wild and 
crazy things regardless of what happens at the block layer underneath it.

A BTRFS RAID-1/5/6 array should cope with a single disk failing, or returning 
corrupted data of any kind, without losing data or panicking the kernel.
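
A minimal way to exercise that expectation with loop devices (the image paths, 
sizes, mount point and test file below are only illustrative):

  # Create two backing files and attach them as loop devices.
  truncate -s 2G /tmp/d1.img /tmp/d2.img
  DEV1=$(losetup --find --show /tmp/d1.img)
  DEV2=$(losetup --find --show /tmp/d2.img)

  # Make a two-device RAID-1 filesystem (data and metadata mirrored),
  # put some data on it, then unmount.
  mkfs.btrfs -d raid1 -m raid1 "$DEV1" "$DEV2"
  mount "$DEV1" /mnt
  dd if=/dev/urandom of=/mnt/testfile bs=1M count=100
  umount /mnt

  # Simulate one disk disappearing, then mount the survivor degraded.
  losetup -d "$DEV2"
  mount -o degraded "$DEV1" /mnt

  # The data should still be readable and the kernel should stay up;
  # the per-device error counters show what BTRFS thinks happened.
  btrfs device stats /mnt

If that test loses data or oopses the kernel, the problem is in BTRFS, not in 
whatever the block layer did when the device went away.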

A BTRFS RAID-0 or single-disk setup should cope with a disk giving errors by 
going read-only or failing all operations on that filesystem.  It should not 
affect any other filesystem or have any significant impact on the rest of the 
system unless it's the root filesystem.
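
One way to test that expectation is to put a single-device filesystem on a 
device-mapper target and swap the live table to the "error" target while it 
is mounted.  This is only a sketch; the image path, the target name "flaky" 
and the mount point are made up for the example:

  # Back a device-mapper 'linear' target with a loop device.
  truncate -s 2G /tmp/single.img
  DEV=$(losetup --find --show /tmp/single.img)
  SECTORS=$(blockdev --getsz "$DEV")
  dmsetup create flaky --table "0 $SECTORS linear $DEV 0"
  mkfs.btrfs /dev/mapper/flaky
  mount /dev/mapper/flaky /mnt

  # Swap the live table to the 'error' target so every I/O now fails.
  dmsetup suspend flaky
  dmsetup load flaky --table "0 $SECTORS error"
  dmsetup resume flaky

  # Writes should fail and the filesystem should force itself read-only
  # without taking the rest of the system down with it.
  dd if=/dev/zero of=/mnt/junk bs=1M count=10; sync
  dmesg | tail

The acceptable outcome is EIO and a read-only filesystem; a hung machine or a 
panic would not be.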

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/


