Performance of MD1220 on PERC H800 slower than MD1120 on PERC 6/E

Marc Stephenson marc.stephenson at lithium.com
Mon Aug 16 14:59:10 CDT 2010


I've noticed serious performance issues with MD1220s compared to MD1120s. I would expect at least the same performance from my MD1220s as from the MD1120, but they're actually about 40% slower. To give actual numbers: both of my MD1220s sustain around 24MB/sec of random IO, while my MD1120 sustains around 41MB/sec. The MD1220 should be at least as fast, if not faster, since it advertises 6Gbps SAS. I've spoken to Dell, and they advised me to upgrade the firmware on the H800s and update the megaraid_sas driver to version 00.00.04.29. I've done both, and performance is unchanged. Their next recommendation was to try installing RHEL 5, which I'm working on now. Has anyone else seen performance problems on their MD1220s?
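For anyone who wants to compare driver levels, the version actually loaded can be confirmed with either of these (standard commands on CentOS 5, no extra tooling assumed):

# modinfo megaraid_sas | grep -i ^version
# cat /sys/module/megaraid_sas/version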

Here are the specs of the disk arrays and servers used in my benchmark:

Server 1
R610 with MD1220 attached to Perc H800 Controller
RAID-10 disk set across 16x300GB disks with a 128K stripe size and read-ahead enabled
48G of physical memory
CentOS 5.5 with an XFS filesystem using default mkfs and mount options

Server 2
R610 with MD1220 attached to Perc H800 Controller
RAID-10 disk set across 16x300GB disks with a 128K stripe size and read-ahead enabled
48G of physical memory
CentOS 5.5 with an XFS filesystem using default mkfs and mount options

Server 3
R610 with MD1120 attached to Perc 6E Controller
RAID-10 disk set across 16x300GB disks with a 128K stripe size and read-ahead enabled
48G of physical memory
CentOS 5.5 with an XFS filesystem using default mkfs and mount options
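Since the stripe size, read-ahead, and cache policy all live on the controller, it may be worth confirming them directly rather than trusting the BIOS setup screens. Assuming LSI's MegaCli is installed in its usual /opt/MegaRAID/MegaCli location (both the PERC H800 and the PERC 6/E are LSI MegaRAID based), something like this should dump the relevant virtual disk properties:

# /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL | grep -Ei 'stripe|cache|ahead'

In particular, a difference in the effective write cache policy (Write-Back vs. Write-Through, e.g. if one controller's battery is still in a learn cycle) could account for a random-IO gap of this size.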

Here are the results of my sysbench benchmark across all 3 servers:

Server 1 (MD1220):
# sysbench --test=fileio --max-time=60 --max-requests=1000000  --file-num=128 --file-extra-flags=direct --file-fsync-freq=0  --file-total-size=150G --num-threads=64 --file-test-mode=rndrw prepare
# sysbench --test=fileio --max-time=60 --max-requests=1000000  --file-num=128 --file-extra-flags=direct --file-fsync-freq=0  --file-total-size=150G --num-threads=64 --file-test-mode=rndrw run
1549.63 Requests/sec executed, 24.213 MB/sec

Server 2 (MD1220):
# sysbench --test=fileio --max-time=60 --max-requests=1000000  --file-num=128 --file-extra-flags=direct --file-fsync-freq=0  --file-total-size=150G --num-threads=64 --file-test-mode=rndrw prepare
# sysbench --test=fileio --max-time=60 --max-requests=1000000  --file-num=128 --file-extra-flags=direct --file-fsync-freq=0  --file-total-size=150G --num-threads=64 --file-test-mode=rndrw run
1558.17 Requests/sec executed, 24.346 MB/sec

Server 3 (MD1120):
# sysbench --test=fileio --max-time=60 --max-requests=1000000  --file-num=128 --file-extra-flags=direct --file-fsync-freq=0  --file-total-size=150G --num-threads=64 --file-test-mode=rndrw prepare
# sysbench --test=fileio --max-time=60 --max-requests=1000000  --file-num=128 --file-extra-flags=direct --file-fsync-freq=0  --file-total-size=150G --num-threads=64 --file-test-mode=rndrw run
2680.38 Requests/sec executed, 41.881 MB/sec
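For what it's worth, both throughput figures line up with sysbench's default 16KB request size (--file-block-size=16384), so the MB/sec numbers are just the request rates restated, and the real regression is in random IOPS:

1549.63 requests/sec x 16KB = ~24.2 MB/sec
2680.38 requests/sec x 16KB = ~41.9 MB/sec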