specs for speed of RAID?

Rechenberg, Andrew arechenberg at shermfin.com
Sat Nov 16 06:43:01 CST 2002


Does this crappy performance carry over to the DC/QCs?  We just got a
6600 with two PV220S enclosures that are connected to a PERC3/QC.  If I
can get a lot better performance from software RAID, then I might
consider trying it.

Ours is a combination of hardware RAID1 and software RAID0 (RAID10).  We
are using software RAID0 because the QC can only span 8 devices and we
wanted to span (stripe) across 12.  I'm wondering whether, if we put some
39160s in there and just used software RAID, we'd get performance similar
to what the list member below is seeing.
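A minimal sketch of that kind of layout, assuming the PERC exports the six
hardware RAID1 mirrors as /dev/sdb through /dev/sdg (hypothetical names) and
that mdadm is available rather than the older raidtools:

    # Stripe a software RAID0 across the six hardware RAID1 logical drives
    mdadm --create /dev/md0 --level=0 --raid-devices=6 --chunk=64 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    mkfs.ext3 /dev/md0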

120-140MB/s is WAY faster than I'm seeing right now :)

Thanks for the info.

Regards,
Andy.

-----Original Message-----
From: CBeck at coradiant.com [mailto:CBeck at coradiant.com] 
Sent: Friday, November 15, 2002 5:56 PM
To: linux-poweredge at dell.com
Subject: Re: specs for speed of RAID?


Attached at the bottom is an email I received from a list member a wee
while ago, when there was some heavy complaining about the PERC 2/3 Si/Di
crappy performance issue ... Craig seems to get some truly spectacular
results with software RAID 1+0 - _much_ faster than 46 MB/s.

As a caveat, of course, RAID performance is highly susceptible to access
type (random, sequential), stripe size, file size, file system parameters,
blah blah blah ...
--
-Chris

From: Alexander Lazarevich <alazarev at hera.itg.uiuc.edu>
Sent by: linux-poweredge-admin at dell.com
Date: 15.11.2002 17:16
To: DELL linux/poweredge email list <linux-poweredge at dell.com>
Subject: specs for speed of RAID?




Is there any place where I can get write/read real-world testing specs
for a PowerEdge 4600 with 8 drives in a RAID 5 array on a PERC3 QC? I've
looked on Dell's website but I can't find anything like that. I also
searched the documentation that came with the system and had no luck
there.

I'm not looking for exact numbers, I'm just curious if the 46MB/sec I'm
getting is in the right ballpark.
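One rough way to sanity-check a number like that, assuming the array is
mounted somewhere with free space and the box has around 2GB of RAM, is to
stream a file a few times the size of memory with dd so the page cache
doesn't inflate the result (paths and sizes here are only examples):

    # ~8GB sequential write, then read it back
    time dd if=/dev/zero of=/mnt/raid/testfile bs=1024k count=8192
    time dd if=/mnt/raid/testfile of=/dev/null bs=1024k

Divide the bytes moved by the elapsed time to get MB/s.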

Thanks!

Alex


From: "Craig I. Hagan" <hagan at cih.com>
Date: 17.10.2002 20:52
To: CBeck at coradiant.com
Subject: RE: RH7.3 [2.4.18-5] w/ 220S Perc 3/Di




No problems.

Software RAID using a single-channel U160 SCSI board; I gave up on the
PERCs: too slow.

Writing one large file I can see 75MB/s. For lots of smaller I/Os it
isn't fabulous (I'm a fan of RAID10 for that).

If you really want your MySQL to scream (OK, lock contention aside), I
would do this:

Get the Dell 14-disk shelf (dual Ultra160 SCSI); it's the disk pile
people attach to their PERC boards.

Throw software RAID on it and set up a RAID 1+0 (10) array, where you
mirror across controllers.
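A sketch of that with mdadm (hypothetical device names, channel A as
sda-sdg and channel B as sdh-sdn; raidtools/raidtab would be the
period-correct alternative): build seven RAID1 pairs, each split across the
two channels, then stripe them with RAID0:

    # Mirror each disk on channel A against its partner on channel B
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdh
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdi
    # ... repeat for the remaining five pairs (md2 through md6) ...
    # Stripe the seven mirrors together into the RAID10
    mdadm --create /dev/md7 --level=0 --raid-devices=7 --chunk=64 \
        /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6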

I've run the above on both a 2550 (ran it out of PCI bandwidth) and a
2650 (only about 15% faster than the 2550, so it was close).

The advantage of RAID10 for things like databases is that much of your
I/O load is random small updates. This is the worst case for RAID5, as
you need to do a read (load in the stripe), modify the stripe, recompute
the parity for the stripe, then finally write it back out. None of that
needs to be done for RAID10; just throw the block out to the disks (it's
a mirror).
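To put rough numbers on that, for a single small random write (ignoring
controller caches):

    RAID5:  read old data + read old parity              -> 2 reads
            new parity = old parity XOR old data XOR new data
            write new data + write new parity            -> 2 writes
            total: 4 disk I/Os (plus the XOR work)
    RAID10: write the block to both sides of its mirror  -> 2 writes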

My RAID10 numbers were about 120-140MB/s write and about 200MB/s read.


FYI: I've just finished rebuilding my test box again, so I'll be able to
run some tests against the above configuration for you, if you want.
I've got 14 70GB disks, 7 per channel (all Dell parts, so it's easy to
order). It is attached to a 2650 with dual 2.4GHz Xeons. I was planning
on setting it back up with 1/4 raid0, 1/4 raid10, 1/4 raid5, 1/4 raid50
(multiple raid5 sets striped together) for some testing.
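For the raid50 piece, the construction (again a sketch with mdadm and
hypothetical device names, ignoring the 1/4-of-each-disk partitioning) is
just two RAID5 sets striped together:

    mdadm --create /dev/md0 --level=5 --raid-devices=7 /dev/sd[a-g]
    mdadm --create /dev/md1 --level=5 --raid-devices=7 /dev/sd[h-n]
    mdadm --create /dev/md2 --level=0 --raid-devices=2 --chunk=64 /dev/md0 /dev/md1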

If you can give me a rough profile of your I/O load, I may be able to
simulate it.


>
> Apologies to reply off-list and get all personal and stuff.
> A quick question - are you saying that you have somehow configured
> (8+1+1) + (mirror) using a Perc controller and are getting 75 MB/s
> throughput? Or is that with software raid to an external enclosure? Are
> you writing one large file? Many small files? I'm trying to figure out
> what hardware I can use for a MySQL DB that has read/write contention on
> the same table (the worst thing you can ask MySQL to do).
> --
> -Chris Beck - Coradiant, Inc - Research & Development - +1.514.908.6314 -
> -- http://www.coradiant.com - Leaders in Web Performance Optimization --
> ------- This email represents my opinion, not that of Coradiant. -------
> Opinions of bureaucrats do not create wrongs.
>
>
>
> > It's more of a firmware issue. 53MB/s is not bad for a raid 5 though.
> > Although I admit that I can generate that much disk IO using just
> > 3 10K rpm disks on a 3/DC.
>
> For an expensive hardware raid board? That (sorry AMI/LSI) is pretty poor.
>
> Using Linux's software raid I can push about 60-65MB/s on a P3 and about
> 70-75MB/s on a P4 with the same shelf.
>
> > Ah, software raid on the 8 disk config using linux is next though and
> > that might turn up some nice numbers :-)
>
> Do you want really good numbers? Do this:
>
> 8+1 RAID5, with a hot spare disk (8+1+1). Partition the two remaining
> disks on the shelves and create a mirrored 2GB partition. Build your
> filesystem on the 8+1+1, but put the journal on the mirrored partition.
> With that, I can get about 75MB/s for streaming writes on a P3 (writing
> 2x-4x memory, so there isn't much in the way of caching effects).
>
> -- craig
>
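A minimal sketch of the journal-on-a-mirror layout described in the quote
above, assuming mdadm, e2fsprogs with external-journal support, and
hypothetical device names (sda-sdi plus sdj as the spare, with small
partitions sdk1/sdl1 on the two leftover disks):

    # 8+1 RAID5 with one hot spare (the "8+1+1")
    mdadm --create /dev/md0 --level=5 --raid-devices=9 --spare-devices=1 \
        /dev/sd[a-i] /dev/sdj
    # 2GB mirrored partition on the two remaining disks for the journal
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdk1 /dev/sdl1
    # Create the external journal device, then build ext3 on the RAID5
    # with its journal pointed at the mirror
    mke2fs -O journal_dev /dev/md1
    mke2fs -j -J device=/dev/md1 /dev/md0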

--



               .-    ... . -.-. .-. . -    -- . ... ... .- --. .

                                         Craig I. Hagan
                                        hagan(at)cih.com






_______________________________________________
Linux-PowerEdge mailing list
Linux-PowerEdge at dell.com
http://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq or search the list
archives at http://lists.us.dell.com/htdig/



