RHEL 5.3 & Multipathing question

Brian O'Mahony brian.omahony at curamsoftware.com
Fri Feb 19 04:50:09 CST 2010


I have set up multipathing using the Dell EqualLogic documentation and it was easy enough (I did have to call tech support about an issue that turned out to be multiple blacklist sections in my conf file).
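In case anyone else hits the same thing: multipath.conf should only have a single blacklist section. What I ended up with was roughly the sketch below (the wwid is just a placeholder for my local disk, and I've left out the device { } settings that the EqualLogic doc recommends for the PS6000, so check your own copy for those):

  defaults {
          user_friendly_names yes
  }

  # only one blacklist section -- having two of these was the problem
  blacklist {
          wwid <wwid-of-local-disk>
  }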

However, at the start of the document it says to update to RHEL 5.4 as soon as possible due to performance issues.

If that is the case, I will have to ask the department this server is for to go back through their testing process for the software they are using, which will probably set us back two weeks or more.

Does anyone have any documentation on this issue?

B

-----Original Message-----
From: linux-poweredge-bounces at dell.com [mailto:linux-poweredge-bounces at dell.com] On Behalf Of Eric Searcy
Sent: Thursday, February 18, 2010 10:24 PM
To: linux-poweredge at dell.com
Subject: Re: RHEL 5.3 & Multipathing question

On Feb 17, 2010, at 4:13 AM, Brian O'Mahony wrote:

> I am setting up a RHEL 5.3 machine connected to an iSCSI SAN (PS6000). The machine is a PE2850. I have two onboard network ports and an Intel NIC with two ports. One port from each is connected to our LAN, and the other port on each is connected to our SAN network, which is segregated from everything else.
>  
> This is my first time setting up access for something with fault tolerance (previously it was all just test beds, etc.). I *had* originally set up the two SAN NICs as a bond with failover set to active-passive.
>  
> As I was reading more documentation, I came across multipathing, and I am wondering whether it is needed in my case. This machine will be the only one connected to the LUN presented by the PS6000. The LUN is 500 GB, and it will be chopped down further by the OS (either ext3 or ext4) into ten 50 GB logical volumes.

As near as I can tell, bonding and dm-multipath don't go together.  With dm-multipath you need at least two devices (paths), and with active-backup bonding I don't think you end up with two distinct devices.  In my experience, if you've configured your initiator to scan for volumes on, say, bond0, you will only "see" one device.
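If you do go the dm-multipath route instead, my understanding is that you would bind an iSCSI interface to each SAN-facing NIC so the initiator logs in once per NIC and you get two sd devices to hand to multipath.  Roughly something like this (the iface names, NIC names and group IP are just examples, not your actual values):

  iscsiadm -m iface -I iface0 --op=new
  iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth2
  iscsiadm -m iface -I iface1 --op=new
  iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth3
  iscsiadm -m discovery -t st -p <group-ip>:3260
  iscsiadm -m node -L all

After that, multipathd should see two paths to the same LUN and assemble them into a single dm device.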

> Is multipath really needed and/or necessary in this case? Why?

So, in the case where you're using bonding, I'd say it's one or the other.  As for whether you'd be better off using dm-multipath *instead*, one point is that even with the higher bonding modes that provide load balancing (like mode=5), you usually can't split traffic destined for the same IP, though I suppose you might be able to spread different volumes across different IPs/MACs on your iSCSI server?

In terms of failover, I don't actually know how fault-tolerant bonding is when used with iSCSI.  I think it would depend on what you set your miimon interval to, and on whether TCP underneath iSCSI can ride out an outage of that length (at which point the gratuitous ARP should have updated the layer 2 forwarding table in the switch and the ARP table on the iSCSI server, and communication would resume).
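For reference, on RHEL 5 the bonding mode and miimon interval normally go in /etc/modprobe.conf, along these lines (mode=1 is active-backup; the 100 ms miimon value is just the commonly used default, not a recommendation for your particular setup):

  alias bond0 bonding
  options bond0 mode=1 miimon=100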

At one point I was using mode=1 bonding with AoE, but I haven't run iSCSI over bonding.  Since AoE sits below the IP layer, I'd assume iSCSI would fare even better with mode=1 bonding, since TCP should provide delivery insurance (though the AoE driver might have been doing its own error detection/correction for lost packets).  But I thought I'd throw in my 2c anyhow, since the last email seemed to be more about why you'd want redundancy/load balancing rather than addressing the active-passive bonding comment.

Eric

_______________________________________________
Linux-PowerEdge mailing list
Linux-PowerEdge at dell.com
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq





