Follow up on my previous post regarding bonding intel gig-ethernet NICs on Poweredge 2900-III

Jefferson Cowart Jefferson.Cowart at libraries.claremont.edu
Mon Sep 15 11:47:07 CDT 2008


In general, bonding multiple interfaces together will give you larger aggregate bandwidth, but it will not increase the bandwidth of any individual flow above the bandwidth of a single link. The details vary depending on which bonding method you are using, but in general a hash is computed over the source and/or destination IP or MAC addresses, and the result of that hash selects which link in the bond carries that flow. As a result, other flows may simultaneously get ~1Gb of bandwidth over a different physical link in the bond, but no single flow can exceed the 1Gb link speed.
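How much the flows spread out depends on the bonding mode and the transmit hash policy. As a rough sketch (the interface names, addresses, and the choice of 802.3ad mode are only assumptions for illustration, not your actual config), a CentOS 5 setup might look something like this:

    # /etc/modprobe.conf -- load the bonding driver for bond0
    # mode=802.3ad (LACP) and xmit_hash_policy=layer3+4 are assumptions;
    # layer3+4 hashes on IP addresses and ports, so separate TCP flows
    # between the same two hosts can land on different slave links.
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100 xmit_hash_policy=layer3+4

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- addresses are placeholders
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1..eth3)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

Even then, a single iperf TCP stream will still top out at one link's speed. To see the aggregate you would need several flows in parallel (for example "iperf -c <server> -P 4"), and the switch's own port-channel load-balancing method also has to spread those flows across its ports.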

--
Thank You
Jefferson Cowart
Network and Systems Administrator
Libraries Information Technology



-----Original Message-----
From: linux-poweredge-bounces at dell.com on behalf of Drew Weaver
Sent: Mon 9/15/2008 9:35 AM
To: 'linux-poweredge at dell.com'
Subject: Follow up on my previous post regarding bonding intel gig-ethernet NICs on Poweredge 2900-III
 
Hi there.

So, as you all had predicted, it was fairly easy to use the kernel bonding support to bond 4 Gig-E ports into an EtherChannel connection under CentOS 5.2.

I have two Poweredge 2900-III servers connected to the same 2960 switch; each server has a 4Gb EtherChannel (bond0) connection to the switch.

When I use iperf to test the transfer rates between the two systems, it never goes above 944 Mbps.

Does anyone have any advice or has anyone ever gotten this to work any better?

Thank you,
-Drew





