Re: btrfs as / filesystem in RAID1

On 2/7/19 7:04 PM, Stefan K wrote:
Thanks, with 'degraded' as a kernel parameter and also in the fstab it works as expected

That should be the normal behaviour,

IMO in the long term it will be. But before that we have a few items to fix around this, such as the serviceability part.

-Anand


because a server must be up and running, and I don't care about a device loss; that's why I use a RAID1. The device-loss problem I can fix later, but it's important that the server stays up. I get informed at boot time and in the log files that a device is missing, and I also see it if I use a monitoring program.

So please change the normal behavior
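What Stefan describes (allowing degraded mounts both at boot and in fstab) might look like the following sketch; the UUID is a placeholder and the exact option placement is an assumption:

```shell
# /etc/fstab -- add 'degraded' to the btrfs mount options for /:
#   UUID=<fs-uuid>  /  btrfs  defaults,degraded  0  0

# /etc/default/grub -- pass rootflags=degraded so the initramfs
# mounts / even with a device missing:
#   GRUB_CMDLINE_LINUX="rootflags=degraded"

# then regenerate the grub configuration:
update-grub
```

Note that permanently allowing degraded mounts means a failed disk no longer stops the boot, so (as Stefan says) you need monitoring to notice the failure at all.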

On Friday, February 1, 2019 7:13:16 PM CET Hans van Kranenburg wrote:
Hi Stefan,

On 2/1/19 11:28 AM, Stefan K wrote:

I've installed my Debian Stretch with / on btrfs in RAID1 across 2
SSDs. Today I wanted to test whether it works. It works fine while the
server is running: if an SSD gets broken I can replace it. But it looks
like it does not work if the SSD fails before a restart. I got the
error that one of the disks can't be read and was dropped to an
initramfs prompt; I expected it to keep running like mdraid and just
report that something is missing.

My question is: is it possible to configure btrfs/fstab/grub so that it
still boots? (That is what I expect from a RAID1.)

Yes. I'm not the expert in this area, but I see you haven't got a reply
today yet, so I'll try.

What you see happening is correct. This is the default behavior.

To be able to boot into your system with a missing disk, you can add...
     rootflags=degraded
...to the linux kernel command line by editing it on the fly when you
are in the GRUB menu.
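For example, after pressing 'e' on the boot entry in the GRUB menu, the edited linux line might look like this (the kernel version and UUID are placeholders, not values from this thread):

```shell
# linux /boot/vmlinuz-<version> root=UUID=<fs-uuid> ro quiet rootflags=degraded
```

Press Ctrl-x or F10 to boot the edited entry; the change is not persistent.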

This allows the filesystem to be mounted in 'degraded' mode this one
time. Once the system is booted, the only thing you should be doing is
getting a new disk in place and fixing the btrfs situation. That means
things like cloning the partition table of the disk that's still
working, doing whatever else is needed in your situation, then running
btrfs replace to replace the missing disk with the new one, and finally
making sure you don't have "single" block groups left (using btrfs
balance), which might have been created for new writes while the
filesystem was running in degraded mode.
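As a hedged sketch, the replacement steps above might look like this on a GPT setup; the device names (/dev/sda surviving, /dev/sdb new), the devid, and the partition layout are assumptions for illustration only:

```shell
# Copy the partition table of the surviving disk (sda) to the new disk (sdb):
sgdisk -R=/dev/sdb /dev/sda
sgdisk -G /dev/sdb            # give the new disk fresh partition GUIDs

# Find the devid of the missing device:
btrfs filesystem show /

# Replace the missing device (here assumed to be devid 2) with the new partition:
btrfs replace start -B 2 /dev/sdb2 /

# Convert any 'single' block groups created while degraded back to RAID1
# ('soft' only touches block groups that are not already RAID1):
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /
```

Afterwards, reinstall the bootloader on the new disk as well, so the system can still boot if the other disk fails next.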

--
Hans van Kranenburg




