MD1220 + H800 NFS performance

Robert Horton r.horton at qmul.ac.uk
Wed Sep 22 08:45:16 CDT 2010


Hi,

I'm having some problems getting decent NFS performance from some
MD1220s connected to an R710. Here's a summary of the setup:

3 x MD1220, each with 24 x 500 GB 7.2k SAS disks
All connected to a PERC H800 in an R710.

At present I have a single RAID60 volume with three spans of 23 disks,
so each enclosure holds one span plus a hot spare. The stripe element
size is 64 KB.

I'm testing the performance with:

iozone -l 1 -u 1 -r 4k -s 10g -e

and getting write performance of:

Direct to filesystem:       1076 MB/s
NFS via loopback interface: 217 MB/s
NFS via IPoIB:              38 MB/s
NFS via Ethernet:           24 MB/s
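
For comparison, a run with O_DIRECT and a larger record size should
take the page cache out of the direct-to-filesystem figure (assuming
this build of iozone supports -I), e.g.:

iozone -l 1 -u 1 -r 1m -s 10g -e -I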

Based on testing other systems I would expect NFS over Ethernet to be
around 100 MB/s (i.e. saturating the GigE link) and NFS over IPoIB to
be higher than that. I've tested the network links with nttcp and
there don't appear to be any problems there.

I've tried various filesystems (ext3, ext4, XFS), but this didn't have
a significant effect.
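
One thing that might matter is aligning the filesystem to the RAID
geometry. For XFS, and assuming the controller presents the RAID60 as
one device with 3 x 21 = 63 data disks and a 64 KB stripe element,
something along these lines should set the stripe unit and width
(untested here; /dev/sdX is a placeholder):

mkfs.xfs -d su=64k,sw=63 /dev/sdX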

I'm wondering:

1) Should the stripe size be smaller? Given that the NFS maximum block
size is 32 KB, each write is going to be smaller than a single 64 KB
stripe element, let alone a full stripe? (There's a quick client-side
check sketched after this list.)

2) Is there a better way of arranging the disks? Given that I want
dual parity I'm more or less stuck with some form of RAID 6, but I
could have more spans, or create separate volumes and stripe them with
LVM (rough sketch after this list).
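
On (1), a quick way to confirm the block size the clients actually
negotiate (rather than the documented maximum) is to check the
rsize/wsize on a client:

# on an NFS client: show the mount options actually in effect
nfsstat -m
# or, equivalently
grep nfs /proc/mounts

If the wsize really is 32 KB, each write RPC covers only half of a
64 KB stripe element, never mind a full stripe.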
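
On (2), the LVM variant I had in mind would be roughly the following,
with one RAID6 virtual disk per MD1220 (untested; /dev/sdb, /dev/sdc,
/dev/sdd and the volume names are placeholders):

pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate md1220vg /dev/sdb /dev/sdc /dev/sdd
# stripe across the three RAID6 volumes, 64 KB stripe size to match
# the controller's element size
lvcreate -i 3 -I 64 -l 100%FREE -n data md1220vg
mkfs.xfs /dev/md1220vg/data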

I'm happy to test different configurations, but given the time needed
to reinitialise the array it would be good to get some pointers
first...

Any thoughts would be appreciated.

Thanks,
Rob


