Re: Replacing a drive from a RAID 1 array

On 2015-06-16 12:58, Hugo Mills wrote:
On Tue, Jun 16, 2015 at 06:43:23PM +0200, Arnaud Kapp wrote:
Hello,

Consider the following situation: I have a RAID 1 array with 4 drives.
I want to replace one of the drives with a new one of greater capacity.

However, let's say I only have 4 HDD slots, so I cannot plug in the new
drive, add it to the array, and then remove the old one.
Is there a *safe* way to change drives in this situation? I'd bet that
booting with 3 drives, adding the new one, then removing the old,
no-longer-connected one would work. However, is there something that
could go wrong in this situation?

    The main thing that could go wrong with that is a disk failure. If
you have the SATA ports available, I'd consider operating the machine
with the case open and one of the drives bare and resting on something
stable and insulating for the time it takes to do a "btrfs replace"
operation.
This would be my first suggestion as well; although, if you only have 4 SATA ports, you might want to invest in a SATA add-in card (if you go this route, look for one with an ASMedia chipset; they're the most reliable add-on controllers I've seen).
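For reference, the replace itself is a single online operation once the new disk is attached; a rough sketch (the device names, mount point, and devid below are examples only, substitute your own):

    # replace the outgoing disk with the new one while the fs stays mounted:
    btrfs replace start /dev/sdd /dev/sde /mnt
    # poll progress until it reports finished:
    btrfs replace status /mnt
    # if the new disk is larger, grow the fs onto the extra space
    # ('btrfs filesystem show /mnt' gives the devid of the new disk):
    btrfs filesystem resize 4:max /mnt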

    If that's not an option, then a good-quality external USB case with
a short cable directly attached to one of the USB ports on the
motherboard would be a reasonable solution (with the proviso that some
USB connections are just plain unstable and throw errors, which can
cause problems with the filesystem code, typically requiring a reboot,
and a restart of the process).
If you decide to go with this option and are using an Intel system, avoid the USB 3.0 ports, as a number of Intel's chipsets have known bugs in their USB3 hardware that are likely to cause serious issues. If your system has an eSATA port, however, try to use that instead of USB; it will almost certainly be faster and more reliable.

    You might also consider using either NBD or iSCSI to present one of
the disks (I'd probably use the outgoing one) over the network from
another machine with more slots in it, but that's going to end up with
horrible performance during the migration.
The other possibility here is ATAoE, which generally gets better performance than NBD or iSCSI, but with the caveat that both systems have to be on the same network link (i.e., no gateways between them). If you do decide to use ATAoE, look into a program called 'vblade' (most distros ship it in a package of the same name); a rough example follows.
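As a minimal sketch, assuming the outgoing disk gets moved to the other machine and the new disk takes its slot locally (the shelf/slot numbers, interface, device names, and mount point below are only examples):

    # on the machine with the spare slot, export the outgoing disk:
    vblade 0 1 eth0 /dev/sdb        # shelf 0, slot 1, over eth0

    # on the btrfs machine, load the AoE initiator; the exported disk
    # shows up as /dev/etherd/e0.1, and the replace runs against it:
    modprobe aoe
    btrfs replace start /dev/etherd/e0.1 /dev/sdX /mnt

The performance caveat above still applies; this is just to show the moving parts.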
