
Belated SUMMARY: LVM, Fibre channel cards and disappearing volumes



I only received a single reply on this, from Vic Engle. Vic suggested I
look at the multipathing component of the drivers. Unfortunately, I
haven't had an opportunity to look into this in any detail (i.e., we
don't have a similar, non-production box to test on).

To answer the second part of my question myself: it doesn't look like
there's an issue getting the box to recognize the SAN volume during
bootup.

I added the following to a startup script, which runs after everything
else is mounted but before anything that requires the SAN volume starts:

echo "scsi add-single-device 3 0 0 2" >/proc/scsi/scsi
pvscan
vgscan
vgchange -a y
mount -a -t ext3
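
One refinement worth a note, as a sketch only (the "3 0 0 2" address and
the scsi3 host are specific to this box): guard the add so the script
stays harmless on a clean boot where the device is already present:

grep -q "Host: scsi3 Channel: 00 Id: 00 Lun: 02" /proc/scsi/scsi || \
    echo "scsi add-single-device 3 0 0 2" >/proc/scsi/scsi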

Thanks to some issues with the SAN product itself, this has gotten
fairly thorough testing.
--Michael

******Original question******

This is a two-part question.

I am having a problem with SAN volumes attached to a server running Red
Hat 8 (2.4.20-20.8smp kernel) and LVM which we put into production this
morning (admittedly a little fast, but it couldn't be avoided). The
server is attached to the SAN with a pair of IBM FAStT FC2-133 Fibre
Channel cards, using the QLogic device driver 6.06.00 in failover mode
(the modules appear as qla2300 and qla2300-conf). The SAN volumes appear
as regular SCSI devices (i.e., /dev/sdb, /dev/sdc and /dev/sdd).
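
(For anyone comparing notes, a quick verification sketch on a 2.4
kernel; nothing here is setup-specific:)

lsmod | grep qla2300    # confirm the qla2300/qla2300-conf modules loaded
cat /proc/scsi/scsi     # list the SCSI devices the kernel currently sees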

I have lost one of the volumes on three separate occasions. Twice it was
after a reboot (including this morning when we put the machine into
production), and once after testing failover (i.e., after I yanked one of
the cables). The device was not accessible at all. However, the other
volumes worked just fine.

So the first part of my question is: has anybody out there had any
experience with this driver and/or some useful hints? Since we have
access to some of the SAN volumes at all times and the data is fine,
I have to say it looks like a driver issue to me (although I'm not
willing to rule anything out just yet).

Recovery from the loss of a volume looks something like this:

echo "scsi add-single-device 3 0 0 2" >/proc/scsi/scsi
 
The arguments after add-single-device are the host adapter, the channel,
the target and the LUN, I believe.
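
For what it's worth, those four numbers can be read out of
/proc/scsi/scsi for any device that is still visible; entries there
follow this general shape (a sketch of the 2.4 format, not my actual
output):

Host: scsi3 Channel: 00 Id: 00 Lun: 02
  Vendor: ...      Model: ...              Rev: ...
  Type:   Direct-Access                    ANSI SCSI revision: 03

The inverse, if you ever need to drop a stale entry first, is the same
echo with remove-single-device in place of add-single-device.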

Then, after much cursing and fumbling around to find the right commands:
 
vgscan -v
vgchange -a y

(I may have left out a step or two there.)

Then I can mount the volume as normal.
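
(For completeness, the whole sequence in one place, including the pvscan
I now suspect was the step I kept leaving out:)

echo "scsi add-single-device 3 0 0 2" >/proc/scsi/scsi  # re-add the LUN
pvscan             # rescan for LVM physical volumes
vgscan             # rescan for volume groups
vgchange -a y      # activate them
mount -a -t ext3   # then mount as normal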

Part two of my question is this: is it safe, wise and/or effective to
script the above and add it to my startup scripts in case the machine
can't mount all the SAN volumes at boot?  As you might guess, it isn't
convenient to lose access to the data whenever the machine reboots, and
you know this never happens during business hours.

For the record, I have no idea how it knows which volume holds the most
important data, as this is the volume which has disappeared each time
(despite the fact that I've changed it in between disappearances).
_______________________________________________
LinuxManagers mailing list - http://www.linuxmanagers.org
submissions: LinuxManagers@linuxmanagers.org
subscribe/unsubscribe: http://www.linuxmanagers.org/mailman/listinfo/linuxmanagers
