Re: Replacing a (or two?) failed drive(s) in RAID-1 btrfs filesystem


 



I understood my mistake in using consumer drives, which is why I
bought the RED versions a few days ago. I would have done this earlier
if I had had the money.

So, to sum up:

I have upgraded btrfs-progs and mounted the filesystem with:
# mount -o degraded /dev/sdi1 /mnt/mountpoint
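
To double-check what btrfs itself sees before going any further (just a
sanity check, assuming the filesystem is still mounted degraded at
/mnt/mountpoint as above):

# btrfs filesystem show /mnt/mountpoint
# btrfs device stats /mnt/mountpoint

The first should list the member devices (and report one as missing if it
has dropped out), and the second prints the per-device read/write/flush/
corruption error counters.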


I want to minimize my risk.

Should I now run
# btrfs device delete /dev/sdc1 /mnt/mountpoint
or
# btrfs check --repair --init-csum-tree ?
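
If the device delete route is the right one, my understanding of the full
sequence (just a sketch; /dev/sdX1 is a placeholder for the new RED drive's
partition, not a real device name here) would be something like:

# btrfs device add /dev/sdX1 /mnt/mountpoint
# btrfs device delete missing /mnt/mountpoint

or, if the old drive is still attached and readable, a single replace
operation instead:

# btrfs replace start /dev/sdc1 /dev/sdX1 /mnt/mountpoint
# btrfs replace status /mnt/mountpoint

Please correct me if that is not the recommended way.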


constantine


On Sun, Feb 8, 2015 at 11:34 PM, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
> On Sun, Feb 8, 2015 at 4:09 PM, constantine <costas.magnuse@xxxxxxxxx> wrote:
>> By the way, /dev/sdc just completed the extended offline test without
>> any error... I feel so confused...
>
> First, we know from a number of studies, including the famous (and now
> kinda old) Google study, that a huge percentage of drive failures come
> with no SMART errors.
>
> Second, SMART is only saying its internal test is good. The errors are
> related to data transfer, which implicates the enclosure (bridge
> chipset or electronics), the cable, or the controller interface. It
> could also be a flaky controller or RAM on the drive itself, which I
> don't think gets checked by SMART tests.
>
> --
> Chris Murphy
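
On the data transfer point: the checks I plan to run to narrow down which
component is at fault (a sketch; /dev/sdc is the suspect drive and
/mnt/mountpoint the degraded mount, as above) are:

# btrfs device stats /mnt/mountpoint
# smartctl -l error /dev/sdc
# dmesg | grep -iE 'ata|sdc'

The btrfs error counters should show whether one device is accumulating
read/write/flush errors, smartctl -l error dumps the drive's own logged
errors, and dmesg should show any link resets or transfer errors reported
by the kernel.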



