Re: RAID1 storage server won't boot with one disk missing


 



On Thu, Sep 17, 2015 at 9:18 AM, Anand Jain <anand.jain@xxxxxxxxxx> wrote:
>
>
>  As of now it will only start normally when -o degraded is passed
>  explicitly.
>
>  It looks like -o degraded is going to be a very obviously wanted
>  feature, so I plan to make it the default and provide an -o
>  nodegraded option instead. Thanks for any comments.
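
(For reference, this is roughly what has to be done by hand today when
a RAID1 member is missing; the device name and mount point below are
just examples:)

    # without -o degraded, mounting a RAID1 with one disk missing fails
    mount -o degraded /dev/sdb /mnt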


If degraded mounts happen by default, what happens when dev 1 goes
missing temporarily, dev 2 is mounted degraded,rw, and then dev 1
reappears? Is there an automatic way to (a) catch dev 1 up with dev 2,
and then (b) make the array no longer degraded?
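
As far as I know that catch-up is manual today, something along these
lines (a sketch; the mount point is hypothetical):

    # after dev 1 reappears, verify/repair copies so both devices agree
    btrfs scrub start -B /mnt
    # and/or rebalance so chunks are mirrored across both devices again
    btrfs balance start /mnt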

I think it's a problem to have automatic degraded mounts when there's
no monitoring or notification system for problems. With that default we
could get silently degraded mounts with no notification at all that
there's a problem with a Btrfs volume.
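
About the best a user can do right now is poll for trouble themselves,
e.g. something like (a sketch; the mount point is hypothetical):

    # report any nonzero per-device error counters
    btrfs device stats /mnt | grep -vE ' 0$'
    # see whether the kernel reports a missing device
    btrfs filesystem show /mnt | grep -i missing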

So, offhand, my comment is that other work is needed before degraded
mounting becomes the default behavior.



-- 
Chris Murphy



