Question: raid1 behaviour on failure

Hi,

I have a raid1 with 3 drives: 698, 465 and 232 GB. I copied 1.7 GB of data to that raid1, balanced the filesystem, and then removed the largest drive (hotplug).
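For reference, the setup was roughly the following (device names and the mount point match the "btrfs fi show" output below; the exact commands are from memory, so treat this as a sketch rather than a transcript):

    # create a 3-disk btrfs raid1 for data and metadata
    mkfs.btrfs -d raid1 -m raid1 /dev/sdg /dev/sdh /dev/sdi
    mount /dev/sdg /mnt/raid1

    # copy ~1.7 GB of data, then rebalance across the devices
    cp -a /some/data /mnt/raid1/
    btrfs balance start /mnt/raid1

    # the largest drive (/dev/sdg) was then physically pulled (hotplug)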

The data was still available. I then copied the /root directory to the raid1, and it showed up via ls -l. After that I plugged the missing drive back in (hotplug). A few seconds later, "btrfs fi show" was giving its usual output again:

Label: none  uuid: 16d5891f-5d52-4b29-8591-588ddf11e73d
	Total devices 3 FS bytes used 1.60GiB
	devid    1 size 698.64GiB used 4.03GiB path /dev/sdg
	devid    2 size 465.76GiB used 4.03GiB path /dev/sdh
	devid    3 size 232.88GiB used 0.00B path /dev/sdi

The /root directory is still showing up, but the raid1 is now mounted *read-only*.

I unmounted it and mounted it again. Now the /root directory on the raid1 is no longer available. It's gone.

I guess I missed some important step to recover the degraded raid1 before unmounting it.
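My guess is that it would have been something along these lines, run before unmounting, but I have not verified this and the mount point is just a placeholder:

    # resynchronise the copies that went stale while /dev/sdg was missing
    btrfs scrub start -B /mnt/raid1

    # or, alternatively, rewrite all chunks across the present devices
    btrfs balance start /mnt/raid1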

What is it that I missed?

Matthias




