PowerConnect 2748 + Linux + link aggregation

Thomas_Chenault at Dell.com
Thu Mar 1 18:09:37 CST 2007

To me it sounds like the 802.3ad bond is failing to aggregate its
members. The contents of /proc/net/bonding/bond<x> should provide more
information. Specifically, the active aggregator for the bond should be
listed and the aggregator ID for each slave should be listed. If one or
more of the slaves has a different aggregator ID than the bond then the
aggregation has been less than fully successful. One possible cause of
failure to aggregate is configuring static link aggregation groups on
the switch rather than LACP groups.
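As a quick way to apply that check, the aggregator IDs can be compared directly from the shell. The sketch below is illustrative only: it assumes a 2.6-era bonding driver whose /proc/net/bonding output contains "Aggregator ID" lines for each slave, and the sample input is made up to show what a failed aggregation looks like.

```shell
#!/bin/sh
# check_agg: read bonding status text on stdin and report whether all
# "Aggregator ID" lines agree. On a healthy 802.3ad bond, every slave
# reports the same aggregator ID as the bond's active aggregator.
check_agg() {
    ids=$(grep 'Aggregator ID' | awk '{print $NF}' | sort -u | wc -l | tr -d ' ')
    if [ "$ids" -eq 1 ]; then
        echo "aggregated: all members share one aggregator"
    else
        echo "NOT aggregated: $ids distinct aggregator IDs"
    fi
}

# Sample input showing a failed aggregation: each slave has landed in
# its own aggregator, as happens with static LAGs instead of LACP.
check_agg <<'EOF'
Slave Interface: eth0
Aggregator ID: 1
Slave Interface: eth2
Aggregator ID: 2
EOF
```

On a live system the function would be fed the real status instead, e.g. `check_agg < /proc/net/bonding/bond0`.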


-----Original Message-----
From: linux-poweredge-bounces at dell.com
[mailto:linux-poweredge-bounces at dell.com] On Behalf Of Steve Thompson
Sent: Thursday, March 01, 2007 3:51 PM
To: linux-poweredge-Lists
Subject: PowerConnect 2748 + Linux + link aggregation

I have four new PE2900 systems running 64-bit CentOS 4.4 (equivalent to 
RHEL 4 U4). Each has the two built-in Broadcom NICs (eth0, eth1) and an 
add-in Intel Pro 1000 card (eth2, eth3), plus a PowerConnect 2748 switch.
I can properly identify which ethX is which NIC port (and yes, they are 
detected in the opposite order to the labeling). Everything has the latest 
firmware. With the switch in either managed or unmanaged mode, I can 
attach any of the four interfaces of each system to the switch, one at a 
time with a suitable IP address and netmask, and can happily pass traffic, 
so all NICs and cables appear to be good.

I am trying to use channel bonding (I'm not a newbie at this), with 
eth0+eth2 as bonding pair bond0 attached to two adjacent ports on the 
switch (eth1+eth3 are not in use). That is, the bonding pair is the first 
NIC on each card. The same applies to each system. From /etc/modprobe.conf:

 	alias bond0 bonding
 	options bond0 mode=802.3ad miimon=100
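For completeness, the matching ifcfg files on RHEL 4 / CentOS 4 would typically look something like the following. This is a sketch of the conventional layout; the IP address and netmask are placeholders, not values taken from the original message.

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.11      # placeholder address
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth2 is analogous)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

The slave interfaces carry no IP configuration of their own; the address lives on bond0.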

and I believe that the ifcfg-ethX configurations are correct.

There are four LAG groups on the switch, each with two port members, and 
the cables from each host are connected to the proper switch ports. The 
switch is in managed mode and has an IP address; no other switch 
configuration changes were made. The four systems have IP addresses 
(.11, .12, .13). Everything comes up: the NICs say we have link, the 
switch says we have link, and ethtool says we have link. The only 
problem is that we cannot pass packets between any systems. In fact, 
at any one time only 50% of the systems can communicate with the switch 
IP, and rebooting all the systems changes which 50%. Using autonegotiation 
or fixed 1000-full makes no difference. If I leave bonding configured on 
each host but delete each LAG group on the switch, all four systems can 
communicate with each other, except that I lose around 50% of the packets 
(e.g. with ping). Help! I've done something dumb but can't work out what.
Steve Thompson                 E-mail:      smt AT vgersoft DOT com
Voyager Software LLC           Web:         http://www DOT vgersoft DOT
39 Smugglers Path              VSW Support: support AT vgersoft DOT com
Ithaca, NY 14850
   "186,300 miles per second: it's not just a good idea, it's the law"

Linux-PowerEdge mailing list
Linux-PowerEdge at dell.com
Please read the FAQ at http://lists.us.dell.com/faq
