RAID1 storage server won't boot with one disk missing

Good afternoon,

Earlier today, I tried to set up a storage server using btrfs but ran
into some problems. The goal was to use two disks (4.0TB each) in a
raid1 configuration.

What I did:
1. Attached a single disk to a regular PC configured to boot with UEFI.
2. Booted from a thumb drive written from an Ubuntu 14.04 Server x64
installation DVD image.
3. Ran the installation procedure. When it came time to partition the
disk, I chose the guided partitioning option. The partitioning scheme
it suggested was:

* A 500MB EFI System Partition.
* An ext4 root partition of nearly 4 TB in size.
* A 4GB swap partition.

4. Changed the type of the middle partition from ext4 to btrfs, but
left everything else the same.
5. Finalized the partitioning scheme, allowing changes to be written to disk.
6. Continued the installation procedure until it finished. I was able
to boot into a working server from the single disk.
7. Attached the second disk.
8. Used parted to create a GPT label on the second disk and a btrfs
partition that was the same size as the btrfs partition on the first
disk.

# parted /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary btrfs #####s ##########s
(parted) quit

9. Ran "btrfs device add /dev/sdb1 /" to add the second device to the
filesystem.
10. Ran "btrfs balance start -dconvert=raid1 -mconvert=raid1 /" and
waited for it to finish. It reported that it finished successfully.
11. Rebooted the system. At this point, everything appeared to be working.
12. Shut down the system, temporarily disconnected the second disk
(/dev/sdb) from the motherboard, and powered it back up.
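
For reference, the whole sequence from steps 9 and 10 boils down to the
following (same device names as above; the last command is the check I
used afterwards to confirm the conversion):

# btrfs device add /dev/sdb1 /
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /
# btrfs fi df /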

What I expected to happen:
I expected that the system would either start as if nothing were
wrong, or would warn me that one half of the mirror was missing and
ask if I really wanted to start the system with the root array in a
degraded state.

What actually happened:
During the boot process, a kernel message appeared indicating that the
"system array" could not be found for the root filesystem (as
identified by a UUID). It then dumped me to an initramfs prompt.
Powering down the system, reattaching the second disk, and powering it
on allowed me to boot successfully. Running "btrfs fi df /" showed
that all System data was stored as RAID1.
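
Roughly, that output had the following shape; the sizes below are
illustrative rather than the real numbers, the relevant part being that
every chunk type shows up as RAID1:

Data, RAID1: total=10.00GiB, used=8.50GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=512.00MiB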

If I want a storage server where either of the two drives can fail at
any time without causing much downtime, am I on the right track? If
so, what should I try next to get the behavior I'm looking for?

Thanks,
Eric