Question about bonding nics and mtu configuration

justin.kinney at academy.com justin.kinney at academy.com
Mon Jun 9 13:53:44 CDT 2008


> We have the RAC up with a single nic for the public interface, the 
> private interconnect and the ISCSI connection to the array.  The MTU
> on all these nics is 1500.

Note: you must set the MTU to 9000 on the iSCSI NICs as well.  If you 
don't, I wouldn't expect the host to send or accept jumbo frames on that 
path.
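For example, the runtime change and a persistent version might look like 
this on a Red Hat-style system (eth2 is just a placeholder for whichever 
NIC carries iSCSI):

```shell
# Runtime change (lost on reboot); eth2 is a placeholder for the iSCSI NIC:
ifconfig eth2 mtu 9000

# Persistent on RHEL: append this line to
# /etc/sysconfig/network-scripts/ifcfg-eth2
#   MTU=9000
# then restart networking to pick it up:
service network restart
```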

> We want to bond one of the Broadcom nics and one of the Intel nics 
> for the public interface using an MTU of 1500. 
> 
> We want to bond the other Broadcom nic with an Intel nic and use it 
> for the RAC interconnect using an MTU of 9000. 

This is no problem at all - just apply the MTU setting to the bond device 
(ifconfig bond0 mtu 9000)
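To make that survive a reboot, a sketch of the ifcfg file (device name, 
addresses, and the Red Hat-style path are assumptions -- adjust for your 
setup):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch):
#   DEVICE=bond0
#   IPADDR=192.168.1.10     # example interconnect address
#   NETMASK=255.255.255.0
#   ONBOOT=yes
#   BOOTPROTO=none
#   MTU=9000
# The slave interfaces inherit the bond's MTU when they are enslaved,
# so you only need to set it on the bond device itself.
```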

> We want to bond the other 2 Intel nics and use them to connect to 
> the NX1950 using ISCSI with an MTU of 9000.               

Again, just set the MTU on the bond device (ifconfig bond1 mtu 9000)

> So, is there a bonding mode that can be used for the public 
> interface that does not require switch configuration?

Modes 1 (active-backup), 5 (balance-tlb), and 6 (balance-alb) do not 
require any special switch configuration.  (Mode 0, balance-rr, generally 
does need the switch ports grouped into a static trunk.)
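A minimal sketch of loading the bonding driver in one of those modes, 
assuming a RHEL-style /etc/modprobe.conf:

```shell
# /etc/modprobe.conf (sketch): bring up bond0 in balance-alb (mode 6),
# which needs no switch-side configuration.
#   alias bond0 bonding
#   options bond0 mode=balance-alb miimon=100
# miimon=100 polls link state every 100 ms so a dead slave is failed out.
```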

> Is there a bonding mode compatible with the Dell switch that we do 
> control for the interconnect?  I am trying to learn the switch as I 
> go and it seems that it supports 802.3ad.  Is there a bonding mode 
> that goes with that?

Be sure to set the MTU on the 6248 to 9000 as well (Switching->Ports->Port 
Configuration-> Maximum Frame Size).

It looks like your switch also supports LACP Link aggregation (802.3ad). 
To make this work, configure your bond with mode 4 and add the ports in 
the switch to a LAG group.
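A sketch of the bond side of that (again assuming a modprobe.conf-style 
setup; bond1 is a placeholder):

```shell
# /etc/modprobe.conf (sketch): 802.3ad / LACP aggregation
#   alias bond1 bonding
#   options bond1 mode=4 miimon=100 lacp_rate=1
# lacp_rate=1 requests fast LACPDUs (every second) from the partner.
# On the switch, put the member ports into the same LAG group with
# LACP enabled -- the bond won't aggregate until both sides agree.
```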

> Also, we tried setting the MTU to 9000 on the RAC interconnect and 
> one node started but the other two would not come up.  Sorry for no 
> more information but I don't have an error code at this time, but 
> does anyone have advice for me about this?

The only advice I can offer is that the MTU must match on every node in 
the cluster, and jumbo frames must be enabled on the switch ports in 
between.
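A quick way to verify jumbo frames end-to-end is a non-fragmenting ping 
with a payload just under the MTU (192.168.1.11 is a placeholder for a 
peer node's interconnect address):

```shell
# -M do forbids fragmentation; -s 8972 = 9000 - 20 (IP) - 8 (ICMP)
ping -M do -s 8972 -c 3 192.168.1.11
# If any device in the path is still at MTU 1500, the ping will fail
# instead of getting replies.
```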

> Finally, does anyone have suggestions for testing these connections 
> once we have the bonding set up?

iperf is a great tool to measure interface bandwidth (
http://sourceforge.net/projects/iperf).  iftop is also a great way to 
monitor the bond once you've got traffic moving between the box and the 
switch.
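A typical test run between two nodes looks like this (node2-priv is a 
placeholder hostname on the private network):

```shell
# On one node, start an iperf server:
iperf -s
# On the other, run a 30-second test against it:
iperf -c node2-priv -t 30
# While the test runs, watch the traffic on the bond:
iftop -i bond1
```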

Hope this helps,
Justin


More information about the Linux-PowerEdge mailing list