RE: Problems with multipathing between VMWare ESXi and LIO

Hello.

As an update, I switched the MTU to 1500 (disabling jumbo frames) and disabled authentication. However, this did not help, although the version errors became consistent: "version Min/Max 0x10/0x29".

I captured some traffic, and apparently ESX is sending something different from what Linux is receiving (the captures are not from the same session, but they should give an idea). I need to get some crossover cables to do packet captures on the wire to see where the packets are getting mangled.
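In case anyone wants to reproduce the per-end captures, roughly this should record the iSCSI login traffic on each side (a sketch only; the interface names are placeholders for this setup):

```shell
# Sketch - interface names below are placeholders.
# On the LIO target (Linux side), capture full iSCSI frames:
tcpdump -i eth0 -s 0 -w lio-side.pcap tcp port 3260

# On the ESXi host (from its shell), capture on the vmkernel interface:
tcpdump-uw -i vmk2 -s 0 -w esx-side.pcap tcp port 3260
```

Comparing the two .pcap files side by side should show whether the frames already differ on the wire or only after reception.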

It's quite weird that one port works correctly while three are failing... and that the working port is part of the same network card as two of the failing ones (an Intel 82545GM with 3 ports in use, plus one Intel 82576).

-Eljas

-----Original Message-----
From: Eljas Alakulppi 
Sent: 27 May 2013 17:32
To: 'Thomas Glanzmann'; Nicholas A. Bellinger
Cc: target-devel@xxxxxxxxxxxxxxx
Subject: RE: Problems with multipathing between VMWare ESXi and LIO

Hello Nicholas & Thomas,

Thank you for the advice.

The PBR setup Thomas describes is the one I'm using.

> Can you verify if your able to reproduce without jumbo frames enabled..?
Okay, I will try this tomorrow.

> I think you misconfigured the target. The right configuration is:
If I understood that correctly, the only difference is that you have authentication disabled (which I would prefer to keep enabled). But thanks, I will try those direct commands. And just to confirm: was the missing LUN 0 due to anonymization, or part of another script?
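For anyone else who wants to keep CHAP on, something along these lines should enable authentication on the TPG and set per-initiator credentials in targetcli (an untested sketch; the IQNs and credentials are placeholders):

```shell
# Untested sketch - IQNs and credentials below are placeholders.
# Enable CHAP authentication on the target portal group:
targetcli /iscsi/iqn.2013-05.fi.example:target1/tpg1 set attribute authentication=1

# Set the CHAP userid/password on the initiator's ACL:
targetcli /iscsi/iqn.2013-05.fi.example:target1/tpg1/acls/iqn.1998-01.com.vmware:esx-host \
    set auth userid=chapuser password=chapsecret
```

The matching credentials then go into the dynamic discovery / static target settings on the ESXi software iSCSI adapter.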


> - I installed one physical server with debian wheezy (why on
>          earth has targetcli 1 GB of dependencies, I think I'll file a
>          bugreport, I mean it is one python script or am I mistaken?)
Known issue due to python-epydoc: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=698753
The "hotfix" is: apt-get --no-install-recommends install targetcli

> So probably using two VLANs, two subnets, one vmkernel port and portal in each makes things much, much easier and it just scales as well as the example I presented in my previous e-mail.
Actually it doesn't, if your target has more bandwidth than any single initiator (and balancing the load can get harder). Say the target has 5x 1 Gbps interfaces and two initiators have 3x 1 Gbps each: each initiator would then use 2 dedicated target ports and 1 shared port. If any single target port goes down, there's an 80% chance that the maximum burst to one initiator drops by 1 Gbps (and a 20% chance of a usually better 0.5 Gbps drop per initiator; either way the burst always drops). If you instead use the same subnet (so each initiator port has a session to each target port), the aggregate bandwidth still drops by 1 Gbps when both initiators are fully loaded (from 5 Gbps to 4 Gbps), but the burst bandwidth stays at the initiator's maximum of 3 Gbps.
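For what it's worth, the arithmetic above can be checked with a quick back-of-the-envelope script (the port counts are just the ones from my example):

```shell
# Back-of-the-envelope check of the failure odds above.
# Topology from the example: 5x 1 Gbps target ports, 3x 1 Gbps per initiator.
target_ports=5
initiator_ports=3

# Dedicated-subnet layout: 4 of the 5 target ports are dedicated
# (2 per initiator), 1 is shared between both initiators.
echo "P(single failure hits a dedicated port): $((4 * 100 / target_ports))%"   # 80%
echo "P(single failure hits the shared port):  $((1 * 100 / target_ports))%"   # 20%

# Single-subnet layout: every initiator port has a session to every
# target port, so one failure cuts the aggregate 5 -> 4 Gbps while each
# initiator can still burst to its own NIC limit.
echo "aggregate after one failure: $((target_ports - 1)) Gbps"
echo "per-initiator burst after:   ${initiator_ports} Gbps"
```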

Attachment: login session not working demo.cap
Description: login session not working demo.cap

Attachment: login session working demo.cap
Description: login session working demo.cap

Attachment: esx-vmk2-demo.cap
Description: esx-vmk2-demo.cap

