RAID-5 and database servers

Craig White craigwhite at
Fri Mar 12 09:39:52 CST 2010

On Fri, 2010-03-12 at 07:06 +0000, Jefferson Ogata wrote:
> On 2010-03-12 04:26, Craig White wrote:
> > On Fri, 2010-03-12 at 02:23 +0000, Jefferson Ogata wrote:
> >> On 2010-03-11 22:23, Matthew Geier wrote:
> >>> I've had a disk fail in such a way on a SCSI array that all disks on
> >>> that SCSI bus became unavailable simultaneously. When half the disks
> >>> dropped off the array at the same time, it gave up and corrupted the RAID
> >>> 5 meta data so that even after removing the offending drive, the array
> >>> didn't recover.
> >> I also should point out (in case it isn't obvious), that that sort of
> >> failure would take out the typical RAID 10 as well.
> > ----
> > ignoring that a 2nd failed disk on RAID 5 is always fatal and only 50%
> > fatal on RAID 10, I suppose that would be true.
> The poster wrote that all of the disks on a bus failed, not just a
> second one. Depending on the RAID structure, this could take out a RAID
> 10 100% of the time.
actually, this is what he wrote...

"When half the disks dropped off the array at the same time, it gave up
and corrupted the RAID 5 meta data so that even after removing the
offending drive, the array didn't recover."

Half != all

I once had a 5-disk RAID 5 array fail the wrong disk, so two drives went
offline at the same time; the failure was catastrophic and I had to
re-install and recover from backup (PERC 3/di and SCSI disks). Not
something I wish to do again.
> In your "second disk" scenario, comparing RAID 5 with RAID 10 in terms
> of failure likelihood isn't fair; you need to compare RAID 50 with RAID
> 10. And the odds depend on the number of disks and the RAID structure.
> Suppose you have 12 disks arranged as a 6x2 RAID 10, and the same number
> of disks as a 2x6 RAID 50. When the second disk fails the odds of loss are:
> - RAID 50: 5/11.
> - RAID 10: 1/11.
> If instead we have the 12 disks as a 3x4 RAID 50, then the odds of loss
> when the second disk fails are:
> - RAID 50: 3/11.
> - RAID 10: 1/11.
> We can now tolerate a third disk failure with our RAID 50 with the odds
> of loss:
> - RAID 50: 6/10.
> - RAID 10: 2/10.
> How often does this happen? It hasn't happened to me, and it hasn't
> happened to anyone I know.
I don't think I understand your 'odds' model. I read the first example
as saying RAID 50 is five times more likely to suffer loss than RAID 10,
and I presume that isn't what you were after.
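The quoted odds can be checked with a quick simulation. This is my own
illustration, not something from the thread; it assumes 12 disks,
uniformly random failures, and that the array is lost as soon as any one
redundancy group (a RAID 5 leg or a mirror pair) accumulates two failed
disks:

```python
# Monte Carlo sketch of the "second disk failure" loss odds.
# Assumption (mine): loss occurs when any single redundancy group
# (RAID 5 leg or mirror pair) holds two or more failed disks.
import random

def loss_prob(groups, n_failures, trials=200_000):
    """groups: disk count per redundancy group, e.g. [6, 6] for a
    2x6 RAID 50 or [2]*6 for a 6x2 RAID 10. Returns the simulated
    probability that some group contains >= 2 of the failed disks."""
    # One entry per physical disk, labeled with its group number.
    disks = [g for g, size in enumerate(groups) for _ in range(size)]
    lost = 0
    for _ in range(trials):
        failed = random.sample(disks, n_failures)  # distinct disks fail
        if any(failed.count(g) >= 2 for g in set(failed)):
            lost += 1
    return lost / trials

print(loss_prob([6, 6], 2))     # 2x6 RAID 50: ~5/11 (~0.45)
print(loss_prob([2] * 6, 2))    # 6x2 RAID 10: ~1/11 (~0.09)
print(loss_prob([4, 4, 4], 2))  # 3x4 RAID 50: ~3/11 (~0.27)
```

With 200,000 trials the estimates land within about ±0.003 of the exact
fractions, which agrees with the 5/11, 1/11, and 3/11 figures quoted
above.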
> In the alternative fair comparison, RAID 5 vs. RAID 1, the second
> failure kills both RAIDs 100% of the time.
actually, I didn't raise the RAID 5 vs. RAID 10 comparison; I only
amplified it with my own experience
> It's pretty clear you don't speak from any recent experience as far as
> RAID 5 performance goes, and you yourself say as much when you say you
> "had already forsaken RAID 5". Like Oracle, you're living in the past.
> You should do some of your own benchmarks.
I'd agree with that assessment... I gave up on RAID 5 a few years ago.

In addition, reading the previously linked article tells me that when I
use SATA drives, I should avoid RAID 5... good enough for me.
> In any case, the argument in that article applies to RAID 10 as well; it
> gives you better probabilities but eventually it will take too long to
> rebuild mirrors and failure will be just as inevitable as with RAID 5.
> Error rates will have to drop to prevent this, and no doubt they will,
> sufficiently that the article's argument is moot. Eventually they will
> drop to the point where we will be using RAID 0.
> >  On top of that,
> > it seems to me that RAID 10 smokes RAID 5 on every performance
> > characteristic my clients are likely to use (and yes, that means
> > databases). RAID 5 primarily satisfies the needs for maximum storage for
> > the least amount of money and that was rarely what I need in a storage
> > system for a server.
> For a lot of access patterns, RAID 5 yields much better write bandwidth
> than RAID 10. I don't know why you think RAID 10 "smokes" RAID 5. You
> should grab a PERC 6 and a couple of MD1000s and try some different
> configurations. I don't think you'll see any smoke in the margins, even
> over the oddly limited gamut of access patterns your clients use.
the last time I bought an MD-1000, Dell would only sell me the PERC-5e;
I don't know why. I could see possibly using RAID 50, but RAID 5 is just
not a path I want to venture down any more.



More information about the Linux-PowerEdge mailing list