PERC/6i performance : RAID 0 vs RAID 10

Priyank Patel pkpatel.lists at gmail.com
Fri Sep 5 19:53:31 CDT 2008


I found the following Dell report on the performance analysis of the PERC 6:

http://www.dell.com/downloads/global/products/pvaul/en/PERC6_PerfWP_WMD1120.pdf

Looking at the RAID 0 graphs on page 10, it seems that write-through
caching gives better performance than write-back in most cases (except at
small block sizes and low concurrency).

Any speculations on why that would be the case?
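
For anyone who wants to reproduce the comparison on their own hardware,
here is a rough sketch of the kind of dd runs involved -- the device name
is a placeholder for a scratch virtual disk, and the block sizes/counts
are just examples (each run writes ~512 MB with O_DIRECT so the page
cache stays out of the picture):

  DEV=/dev/sdX   # scratch PERC virtual disk; everything on it is destroyed
  dd if=/dev/zero of=$DEV bs=4k  count=131072 oflag=direct
  dd if=/dev/zero of=$DEV bs=64k count=8192   oflag=direct
  dd if=/dev/zero of=$DEV bs=1M  count=512    oflag=direct
  dd if=/dev/zero of=$DEV bs=4M  count=128    oflag=direct

The write policy can be flipped between Write Back and Write Through from
the PERC BIOS, or via OMSA between passes (something along the lines of
"omconfig storage vdisk action=changepolicy controller=<id> vdisk=<id>
writepolicy=wt" -- check the omconfig documentation for the exact syntax
on your version).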

John, if you learn anything from your tech support ticket, please post it
for the benefit of the community.

- P

On Fri, Sep 5, 2008 at 9:37 AM, John LLOYD <jal at mdacorporation.com> wrote:

> > Message: 3
> > Date: Fri, 5 Sep 2008 00:55:48 -0700
> > From: "Priyank Patel" <pkpatel.lists at gmail.com>
> > Subject: PERC/6i performance : RAID 0 vs RAID 10
> > To: linux-poweredge at dell.com
> > Message-ID: <f4e12c70809050055wee27b7r89bc0a28dffe8e24 at mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Hi,
> > We are testing the write performance of the PERC/6i RAID controller
> > with PowerEdge servers.  Here are some very surprising numbers from
> > dd (with or without O_DIRECT does not seem to matter), using Linux
> > kernel 2.6.20:
> >
> > ---
> ...snip...
> > ---
> >
> > dd results
> >
> > RAID 10 :
> > Write ~ 276 MBps
> >
> > RAID 0 :
> > Write ~ 25 MBps
> > Read ~ 540 MBps
> >
> > It's pretty confusing why the write performance of a RAID 0 across
> > 6 disks is so dismal.  We have tried using O_DIRECT, thereby getting
> > rid of file-system caching effects...
> >
> > Any idea what's going on here?
> >
> > Thanks,
> > -pkpatel
> >
> > PS: PERC/5i write performance seems to be as expected (we don't have
> > the benchmark numbers for this yet, but there have been no issues so
> > far).
>
> My own testing of a 6/E shows similar results.  RAID 0 is not good on
> the 6/E -- md is much better.  Two disks assembled with mdadm give
> 120 MB/s write and 123 MB/s read.  (Reads are normally faster than
> writes on disks.)  Using the 6/E RAID-0 setting I get as little as
> 33 MB/s.  I'm going to complain to my tech contact and ask for a fix.
> The performance is not in accordance with the documentation.
>
> FYI this is on RH 5.1 with patches.  The omreport vdisk and mdadm
> --detail reports are as follows; the individual disks are "raid-0" by
> themselves.
>
> ID                  : 2
> Status              : Ok
> Name                : p1
> State               : Ready
> Progress            : Not Applicable
> Layout              : RAID-0
> Size                : 372.00 GB (399431958528 bytes)
> Device Name         : /dev/sde
> Type                : SAS
> Read Policy         : No Read Ahead
> Write Policy        : Write Back
> Cache Policy        : Not Applicable
> Stripe Element Size : 64 KB
> Disk Cache Policy   : Disabled
>
> ID                  : 3
> Status              : Ok
> Name                : p2
> State               : Ready
> Progress            : Not Applicable
> Layout              : RAID-0
> Size                : 372.00 GB (399431958528 bytes)
> Device Name         : /dev/sdf
> Type                : SAS
> Read Policy         : No Read Ahead
> Write Policy        : Write Back
> Cache Policy        : Not Applicable
> Stripe Element Size : 64 KB
> Disk Cache Policy   : Disabled
>
> /dev/md2:
>        Version : 00.90.03
>  Creation Time : Thu Aug 28 12:51:28 2008
>     Raid Level : raid0
>     Array Size : 780140032 (744.00 GiB 798.86 GB)
>   Raid Devices : 2
>  Total Devices : 2
> Preferred Minor : 2
>    Persistence : Superblock is persistent
>
>    Update Time : Thu Aug 28 12:51:28 2008
>          State : clean
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>  Spare Devices : 0
>
>     Chunk Size : 128K
>
>           UUID : 3d0b2241:9ef301db:476dd429:d8a9605a
>         Events : 0.1
>
>    Number   Major   Minor   RaidDevice State
>       0       8       65        0      active sync   /dev/sde1
>       1       8       81        1      active sync   /dev/sdf1
>
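
For comparison, a two-disk software RAID 0 like the /dev/md2 above can be
put together and given a quick sequential test roughly as follows (device
names and the 128K chunk size are taken from the mdadm --detail output
above; the dd sizes are just examples, and the array contents are
destroyed):

  mdadm --create /dev/md2 --level=0 --raid-devices=2 --chunk=128 /dev/sde1 /dev/sdf1
  dd if=/dev/zero of=/dev/md2 bs=1M count=1024 oflag=direct   # write test
  dd if=/dev/md2 of=/dev/null bs=1M count=1024 iflag=direct   # read test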