How to un-degrade RAID10

I see I'm not the only one to flounder with this. Btrfs v0.19 on
openSUSE 11.3-RC1 (2.6.34-9-default).
I created a RAID10 across 4 partitions, which worked fine. I unmounted it and
zeroed one complete partition (as in of=/dev/sda8). Remounted with the degraded
option, looked around; the data seemed safe.
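For anyone following along, the steps up to this point look roughly like the following. This is a sketch only: the partition names (/dev/sda5 through /dev/sda8) and the mount point are hypothetical, and the command syntax shown is that of current btrfs-progs, which differs from the v0.19-era tools.

```shell
# Create a 4-device RAID10 (data and metadata) and mount it.
# Partition names here are hypothetical examples.
mkfs.btrfs -d raid10 -m raid10 /dev/sda5 /dev/sda6 /dev/sda7 /dev/sda8
mount /dev/sda5 /mnt/test

# ... use the filesystem, then unmount and simulate a dead member
# by wiping one partition completely:
umount /mnt/test
dd if=/dev/zero of=/dev/sda8 bs=1M

# With a member missing/wiped, the filesystem only mounts degraded:
mount -o degraded /dev/sda5 /mnt/test
```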
Attempted to remove sda8 from the array, but the device delete failed with an ioctl error (returned 1). Hmmm.
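With current btrfs-progs, the usual way to drop a member that is no longer recognisable (its btrfs signature was wiped) is the special device name "missing" rather than the old device path; deleting /dev/sda8 by name would be expected to fail in this state. Shown here as a hedged sketch, not the exact v0.19 command:

```shell
# Remove the no-longer-present member from the degraded array.
# "missing" is a literal keyword understood by btrfs device delete.
btrfs device delete missing /mnt/test
```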

Did a (normal, non-RAID) mkfs on the partition, then attempted to add it
back as per the wiki. The filesystem still won't mount without the degraded
option, even though all 4 partitions are accepted OK, and a rebalance worked.
The array now reports as 5 devices, with #2 (the original slot for sda8)
now vacant, dare I say "missing". It reports "*** Some devices missing".
This seems counterintuitive to me. Suggestions?
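The add-back and rebalance described above would, with current btrfs-progs syntax (again, not the v0.19-era tools), look something like this; the mount point is a hypothetical example:

```shell
# Re-add the freshly re-mkfs'ed partition to the mounted filesystem,
# then restripe the data across all members:
btrfs device add /dev/sda8 /mnt/test
btrfs filesystem balance /mnt/test

# Inspect the result; a stale vacant slot shows up in this listing:
btrfs filesystem show
```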
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

