Re: RAID6 grow failed

On Tue, 27 Mar 2012 22:44:18 -0400 Bryan Bush <bbushvt@xxxxxxxxx> wrote:

> I hope this is the right place to ask this question.  I have an 8
> drive RAID 6 array that I wanted to grow to 13 drives (adding 5 more).
>  I issued the mdadm command and checked /proc/mdstat and all looked
> well.  However at some point in time a disk failed and that hung my
> system.  Upon reboot the array is inactive and I can't get it to
> reassemble.
> 
> /proc/mdstat shows this
> 
> md1 : inactive sdp1[11](S) sdi1[3](S) sdd1[7](S) sdr1[13](S)
> sdg1[1](S) sdc1[6](S) sdq1[12](S) sdn1[9](S) sdo1[10](S) sdh1[2](S)
> sda1[4](S) sdf1[0](S) sdb1[8](S)
>       25395674609 blocks super 1.2
> 
> 
> If I look at mdadm -E /dev/sdX1 I see most are State active, while
> some are State clean.
> 
> 
> root@diamond:~# mdadm -E /dev/sd[abcdfghinopqr]1
> mdadm: metadata format 01.02 unknown, ignored.
> mdadm: metadata format 00.90 unknown, ignored.

Hmmm... what do you have in /etc/mdadm.conf??
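
[Editor's note: the "metadata format 01.02 unknown" warnings above usually
mean the ARRAY lines in mdadm.conf use the old zero-padded metadata values,
which newer mdadm releases reject.  A sketch of the fix, using the array
UUID quoted later in this mail:

  # old form, no longer accepted:
  #ARRAY /dev/md1 metadata=01.02 UUID=fa32e2c5:e7bda20b:32af7c90:c7ee61eb
  # write it as 1.2 instead (or drop the metadata= word entirely):
  ARRAY /dev/md1 metadata=1.2 UUID=fa32e2c5:e7bda20b:32af7c90:c7ee61eb

The metadata= keyword is optional on ARRAY lines; the UUID alone is enough
for mdadm to identify the array.]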


> 
> Is there anything I can do to get the array back up?

sdg1 is the device that failed, so

 mdadm -S /dev/md1
 mdadm -A -f /dev/md1 /dev/sd[abcdfhinopqr]1

should start the array.

Though if the names have changed at all it would be safer to do

  mdadm -Asf /dev/md1 -u fa32e2c5:e7bda20b:32af7c90:c7ee61eb

then mdadm will find the right devices and use them.

When the reshape finishes you will need to add sdg1 or a replacement and
let it recover.
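
[Editor's note: that last step might look like the following — a sketch,
assuming the replacement disk is partitioned and appears as /dev/sdg1
again:

  # return the failed slot to service once the reshape is done
  mdadm /dev/md1 --add /dev/sdg1
  # watch the recovery progress
  cat /proc/mdstat
]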

NeilBrown


