Re: Strange behavior when replacing device on BTRFS RAID 5 array.


 



On Mon, Jun 27, 2016 at 11:29 AM, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:

>
> Next is to decide to what degree you want to salvage this volume and
> keep using Btrfs raid56 despite the risks

Forgot to complete this thought. So if you get a backup and decide
you want to fix it, I would see if you can cancel the replace with
"btrfs replace cancel <mp>" and confirm that it actually stops. Now
comes the risky part: either try "btrfs device add" and then "btrfs
device remove" with the flaky drive still attached, or pull the bad
drive, reboot, and see if it'll mount with -o degraded, and then use
add and remove (in which case you'll use 'device remove missing').

With the first option, you risk Btrfs continuing to use the flaky bad drive.

With the second, you risk whether a degraded mount will work at all,
and whether any other drive in the array hits a problem while
degraded (like an unrecoverable read error from a single sector).
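
For what it's worth, here's a rough sketch of the two sequences. The
mountpoint and device names below are just placeholders for your
actual setup, so check each step against the btrfs-progs docs for
your version before running anything:

    # Option 1: flaky drive stays attached
    btrfs replace cancel /mnt
    btrfs device add /dev/sdNEW /mnt
    btrfs device remove /dev/sdBAD /mnt    # data migrates off the bad drive here

    # Option 2: pull the bad drive, reboot, then
    mount -o degraded /dev/sdGOOD /mnt
    btrfs device add /dev/sdNEW /mnt
    btrfs device remove missing /mnt       # 'missing' stands in for the absent drive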


-- 
Chris Murphy



