MD1000 and PERC5/E I/O bottleneck

Eric Yablonowitz eyablon at
Fri Jun 1 18:02:33 CDT 2007


I am trying to tune the I/O for a PostgreSQL DB server, and I'm  
running into a bottleneck.


Dell PE-6950 w/ 32GB RAM and 4 dual core Opterons
PERC-5/E RAID card in slot 3 (PCIe-8x slot)
MD1000 array w/ 15 x 146GB 15K-RPM SAS HDDs


CentOS 5 (2.6.18-8.1.4.el5 x86_64)
bonnie++ for benchmarking

I am most concerned about WRITE I/O performance, since my intention
is to keep the DB small enough to fit entirely in the OS cache
(making reads unimportant once the cache is seeded).  No matter what
I do, I cannot get better than about 280 MB/s of aggregate write
throughput.
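
For reference, the numbers below come from invocations along these
lines (the mount point and user are just examples; -s is set to
twice RAM so the OS cache can't absorb the writes):

  # write-focused run: skip the per-char and file-creation tests
  bonnie++ -d /vd0/bench -s 64g -r 32768 -n 0 -f -u root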

I configured the MD1000 as two 7-disk RAID 5 virtual disks (VDs).
Here's what I get:

running one bonnie++ process on one virtual disk  - 184 MB/s
running two bonnie++ processes on one virtual disk - 210 MB/s
running three or more processes starts to degrade performance
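
The multi-process numbers come from launching the writers
concurrently in separate directories on the same VD, roughly like
this (paths again are examples):

  # two concurrent writers against the same virtual disk
  bonnie++ -d /vd0/bench1 -s 64g -r 32768 -n 0 -f -u root &
  bonnie++ -d /vd0/bench2 -s 64g -r 32768 -n 0 -f -u root &
  wait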

OK, so it would seem that a single 7-spindle RAID 5 VD can handle
around 210 MB/s of writes.  Fair enough.  But I have two of these
VDs.  What happens if I run bonnie++ on the second VD at the same
time?  My assumption was that I should be able to sustain similar
throughput on both arrays simultaneously.  After all, the buses
involved are nowhere near saturation at these rates: the x4 3 Gb/s
SAS link to the MD1000 is good for roughly 1.2 GB/s, and the PCIe
x8 slot for about 2 GB/s.  But here's what I get:

running one bonnie++ process on each virtual disk  - 138 MB/s per
process (276 MB/s total throughput)
running two bonnie++ processes on each virtual disk - 64 MB/s per
process (128 MB/s per VD, 256 MB/s total throughput)
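
While the concurrent runs are going I watch the per-device numbers
with iostat to confirm where the throughput lands (sdb and sdc are
the two VDs on my box; substitute your own device names):

  # extended per-device stats in kB/s, 5-second intervals
  iostat -xk 5 sdb sdc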

So these results imply a bottleneck somewhere around 280 MB/s
aggregate that is preventing me from getting anywhere near what my
14 spindles can handle.  Any thoughts on what this bottleneck might
be?
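
In case it matters, two things I still want to rule out are the
controller's cache policy and the negotiated PCIe link width.  A
rough sketch of how I'd check them (this assumes LSI's MegaCli tool
is installed; the PCI address below is just a placeholder):

  # confirm the VDs are actually running write-back, not write-through
  MegaCli -LDInfo -LALL -aALL | grep -i 'cache policy'

  # find the PERC's PCI address, then check the negotiated link width
  lspci | grep -i raid
  lspci -vv -s 0b:00.0 | grep -i lnksta    # hoping for "Width x8"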

