MD1000 and PERC5/E I/O bottleneck

Eric Yablonowitz eyablon at tripadvisor.com
Mon Jun 4 09:30:53 CDT 2007


One additional tidbit:

If I run the same tests simultaneously against the 5 internal disks
(RAID-5) connected to the PERC-5/i and against one of the 7-disk RAID-5
arrays in the MD1000 connected to the PERC-5/E, I get much better total
throughput than I do when writing to the two different virtual disks on
the PERC-5/E/MD1000.
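
A minimal sketch of that kind of parallel run (the mount points, user,
and label are placeholders; -s is in MB and set to 64GB, twice the 32GB
of RAM, so the OS cache can't absorb the writes):

    # One bonnie++ instance per controller, started at the same time,
    # then compare the combined block-write throughput.
    bonnie++ -d /mnt/perc5i -s 65536 -f -u nobody -m internal &
    bonnie++ -d /mnt/md1000-vd0 -s 65536 -f -u nobody -m md1000 &
    wait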

So this seems to be a limitation of the PERC-5/E.  That is VERY
surprising, considering this card is designed to handle several MD1000
or MD3000 arrays connected in a SAS daisy chain.  If I can't even max
out the performance of a SINGLE MD1000, I won't get any performance
increase from adding additional arrays.
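
One thing worth double-checking before blaming the card outright is
whether the PERC-5/E actually negotiated the full x8 PCIe link in that
slot.  A rough sketch (the bus address below is a placeholder):

    # List RAID controllers to find the PERC-5/E's bus:device.function
    lspci | grep -i raid
    # Dump the details for that device and compare the LnkCap / LnkSta
    # lines to see the supported vs. negotiated PCIe link width
    lspci -vv -s 0b:00.0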

Eric

On Jun 1, 2007, at 7:02 PM, Eric Yablonowitz wrote:

> Hello,
>
> I am trying to tune the I/O for a PostgreSQL DB server, and I'm  
> running into a bottleneck.
>
> Hardware:
>
> Dell PE-6950 w/ 32GB RAM and 4 dual core Opterons
> PERC-5/E RAID card in slot 3 (PCIe-8x slot)
> MD1000 array w/ 15 x 146GB 15K-RPM SAS HDDs
>
> Software:
>
> CentOS 5 (2.6.18-8.1.4.el5 x86_64)
> bonnie++ for benchmarking
>
> I am most concerned about WRITE I/O performance, as my intention is  
> to keep the DB small enough to stay inside the OS cache (making  
> reads unimportant once the cache is seeded).  It seems that no matter  
> what I do, I cannot get better than about 280MB/s write throughput.
>
> I configured the MD1000 as two 7-HDD RAID-5 virtual disks (VDs).  
> Here's what I get (a sample invocation is sketched after the numbers):
>
> Running one bonnie++ process on one virtual disk - 184MB/s
> Running two bonnie++ processes on one virtual disk - 210MB/s
> Running three or more processes starts to degrade performance...
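>
> Roughly what each of those runs looks like (the mount point, user,
> and label are placeholders; -s is in MB and set to 64GB, twice RAM,
> so the OS cache can't hide the write path, and -f skips the slow
> per-character tests):
>
>     bonnie++ -d /mnt/vd0 -s 65536 -f -u nobody -m vd0-run1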
>
> Ok, so it would seem that a single 7-spindle RAID-5 VD can handle  
> around 210MB/s of writes.  Fair enough.  But I have two of these  
> VDs.  What happens if I run bonnie++ on the second VD at the same  
> time?  My assumption was that I should be able to sustain similar  
> throughput on both arrays simultaneously.  After all, the buses  
> involved (SAS and PCIe) are nowhere near saturation at these rates.  
> But here's what I get:
>
> Running one bonnie++ process on each virtual disk - 138MB/s per  
> process (276MB/s total throughput)
> Running two bonnie++ processes on each virtual disk - 64MB/s per  
> process (128MB/s per VD and 256MB/s total throughput)
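>
> A handy way to see where the throughput tops out during runs like
> these is iostat from the sysstat package (the device names below are
> placeholders for the two MD1000 VDs):
>
>     # Extended per-device stats in MB/s, refreshed every 5 seconds
>     iostat -dmx /dev/sdb /dev/sdc 5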
>
> So these results imply that there is a bottleneck somewhere that is  
> preventing me from getting anywhere near what my 14 spindles can  
> handle.  Any thoughts on what this bottleneck might be?
>
> Thanks,
> Eric


