Re: Replacing a (or two?) failed drive(s) in RAID-1 btrfs filesystem


 



On Sun, Feb 8, 2015 at 4:53 PM, constantine <costas.magnuse@xxxxxxxxx> wrote:
> I understood my mistake on using consumer drives and this is why I
> bought the RED versions some days ago. I would have done this earlier
> if I had the money.

You need to raise the SCSI command timer value for the drives that
don't support SCT ERC. That way the kernel won't reset the link when
a drive hangs on a read error; the drive gets a chance to report the
read error and Btrfs can fix the problem from the good copy. If you
don't do this, the long error-recovery hangs on these drives simply
get worse until there's data loss. And as far as I know, Btrfs
doesn't create another copy in such a case.
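A sketch of both steps, assuming /dev/sdX stands in for one of the
consumer drives and 180 seconds is just a commonly suggested value
(the timeout setting is per boot, so put it in a udev rule or startup
script if you keep these drives):

# smartctl -l scterc /dev/sdX
# echo 180 > /sys/block/sdX/device/timeout

The first command shows whether the drive supports SCT ERC at all;
the second raises the kernel's SCSI command timer so it waits longer
than the drive's internal error recovery. If a drive does support SCT
ERC you can instead cap its recovery time, e.g.

# smartctl -l scterc,70,70 /dev/sdX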


> So to sum up.
>
> I have upgraded my btrfs-progs and I have mounted the filesystem with
> # mount -o degraded /dev/sdi1 /mnt/mountpoint
>
>
> I want to minimize my risk.

Back up the fking volume first. Whatever files won't back up, you'll
have to use btrfs restore on them if you ever want to retrieve them.
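
If it helps, the restore invocation looks roughly like this (the
target directory is just an example; it must be on a different,
healthy filesystem with enough free space):

# mkdir -p /mnt/rescue
# btrfs restore -v /dev/sdi1 /mnt/rescue

btrfs restore copies files out of a degraded or unmountable
filesystem without writing to it.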

>
> Now I should do
> # btrfs device delete /dev/sdc1 ?
> or
> # btrfs check --repair --init-csum-tree ?

Try the first one. Just realize that any data that's on both the
failed drive and sdc1 will be lost, which is why you must have a
backup, or have run btrfs restore, before you start. There is no way
to re-add sdc1 once you remove it.
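
In case a concrete sequence helps, a sketch (device and mount point
names are the ones from this thread; note that device delete takes
the mount point as well):

# btrfs device delete /dev/sdc1 /mnt/mountpoint
# btrfs filesystem show /mnt/mountpoint

The delete relocates the data off sdc1, so expect it to take a while;
filesystem show afterwards should no longer list sdc1.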

-- 
Chris Murphy



