network card bandwidth on a poweredge 1850?

Brett Dikeman brett.dikeman at gmail.com
Tue Mar 3 20:42:28 CST 2009


On Tue, Mar 3, 2009 at 6:39 PM, Jason Slagle <raistlin at tacorp.net> wrote:

> At least Netbackup seems to require a good amount of tweaking to write at
> local drive speeds from a network source.

One vendor's incompetence does not imply another's.  I get what you're
saying, however, and I have been going through a guide I found from
Sun that seems to cover Networker from a performance standpoint.

 There's very little that can be done with st; only some buffer sizes
can be tweaked, and it appears Red Hat compiled it with a fixed buffer
size, which disables buffer_kbs - or I'm misreading st's output to
dmesg.
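
For the curious, this is roughly what I poked at (assuming the stock
st module; whether buffer_kbs is honored at all depends on how the
driver was built):

    # does the module advertise the parameter at all?
    modinfo st | grep -i buffer_kbs

    # reload st with a bigger buffer (nothing can have a tape device open)
    rmmod st && modprobe st buffer_kbs=256

    # st prints its buffer configuration when it loads
    dmesg | grep '^st'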

> One way to test would be to use something like dd and netcat to just write
> /dev/zero on one end to /dev/null on the other - keeping other IO out of
> the picture.

Yes, I plan on testing using iperf, which provides a little more
insight into what's going on.  netcat just leaves you with an average
write speed.  Further detail requires something like iptraf, which
unfortunately proved itself unable to handle more than about 600Mbit
of traffic, at least on this hardware.
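
For what it's worth, the test I have in mind looks roughly like this
(the hostname and port are placeholders, and netcat option syntax
varies between variants):

    # iperf: server on the receiving box, client on the sender,
    # reporting every 5 seconds over a 60-second run
    iperf -s
    iperf -c backupserver -t 60 -i 5

    # the dd/netcat version, which only gives an end-of-run average
    nc -l -p 5001 > /dev/null                                    # receiver
    dd if=/dev/zero bs=1M count=10000 | nc backupserver 5001     # sender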

> You using jumbo frames?

Nope. The system didn't have any trouble pulling ~115MB/sec over a
single NIC, and that's stellar in my book.  Standard frames do cost
some efficiency (*roughly* 920Mbit of payload out of 980Mbit on the
wire, about a 6% difference), but I don't think jumbo frames would
yield any appreciable gains here.
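
(That ~6% is just standard framing overhead: with a 1500-byte MTU you
get about 1448 bytes of TCP payload per 1538 bytes on the wire, versus
roughly 99% payload at a 9000-byte MTU.)  If I ever do want to try it,
it should just be a matter of bumping the MTU on both hosts and the
switch - eth0 here is a placeholder:

    ifconfig eth0 mtu 9000
    # or, with iproute2:
    ip link set eth0 mtu 9000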

>  TCP/UDP?  If TCP are you offloading?

Offloading is turned on by default, though the built-in NICs don't
support UDP fragmentation offloading or "generic" offloading (I was,
however, able to flip the latter on; it made no difference).  I also
expanded net.core.wmem_max / net.core.rmem_max and the RX/TX ring
buffers on both cards, to no effect.
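
For the record, the knobs in question were roughly these (eth0 and the
sizes are just examples; what each NIC/driver actually supports
varies):

    # check and toggle offloads
    ethtool -k eth0
    ethtool -K eth0 gso on

    # socket buffer ceilings
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216

    # RX/TX ring buffers
    ethtool -g eth0
    ethtool -G eth0 rx 4096 tx 4096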

Brett


