RAID-5 and database servers

Craig White craigwhite at
Thu Mar 11 22:26:46 CST 2010

On Fri, 2010-03-12 at 02:23 +0000, Jefferson Ogata wrote:
> On 2010-03-11 22:23, Matthew Geier wrote:
> > I've had a disk fail in such a way on a SCSI array that all disks on
> > that SCSI bus became unavailable simultaneously. When half the disks
> > dropped off the array at the same time, it gave up and corrupted the RAID
> > 5 meta data so that even after removing the offending drive, the array
> > didn't recover.
> I also should point out (in case it isn't obvious), that that sort of
> failure would take out the typical RAID 10 as well.
Ignoring that a 2nd failed disk in RAID 5 is always fatal, while in RAID
10 a 2nd failure is only fatal when it takes out the surviving half of
an already-degraded mirror pair, I suppose that would be true.
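To put a rough number on that, here's a sketch of mine (not anything
from the article): it assumes simple two-disk mirrors, independent
failures, and that the 2nd failure strikes a uniformly random surviving
drive, so the exact odds depend on how many disks are in the array.

```python
# Back-of-the-envelope odds that a 2nd random disk failure is fatal,
# given that one disk has already failed. Assumes independent failures
# and (for RAID 10) simple two-disk mirror pairs.

def raid5_second_failure_fatal(n_disks: int) -> float:
    # RAID 5 survives exactly one lost disk; any 2nd loss is fatal.
    return 1.0

def raid10_second_failure_fatal(n_disks: int) -> float:
    # RAID 10 (striped mirrors): the 2nd failure is fatal only if it
    # hits the surviving partner of the already-degraded pair -- one
    # fatal candidate out of the n_disks - 1 remaining drives.
    return 1.0 / (n_disks - 1)

for n in (4, 8, 12):
    print(n, raid5_second_failure_fatal(n),
          round(raid10_second_failure_fatal(n), 3))
```

So for a 4-disk RAID 10 the 2nd failure is fatal about a third of the
time, and the odds drop as the array grows; for RAID 5 it is always 1.0.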

I've been reading this thread, and the thread about Dell's pricing of
SATA disks, pretty much in silence, wondering about some of the massive
generalizations and limited-scope opinions that have been expressed on
this list, and I figure it's probably time for me to pipe in with my
underinformed view.

RAID is a great tool, and traditionally servers have been sold with
high-grade hardware (controllers & hard drives), but of course the
pressure is always on to get the maximum amount of storage for the
minimum cost, so it seems we can never find RAID controllers or hard
drives that are cheap enough. The truth is that the SATA controllers
are fairly marginal, and some of the SATA drives are really not
suitable for putting into a server from which you expect some
durability and stability over time. Not that that is going to stop
people from buying them anyway.

So if Dell is selling a high-quality hard drive with better-than-average
durability, built with the expectation that it will last under 24/7
usage, it's entirely reasonable to have to pay more than for the
dirt-cheapest SATA drive you can find online. Of course, you will have
to live with the consequences if you go with the dirt-cheap drive.
Personally, I put a lot of value on my time and my customers' data.

I read this article last year...

and I had already forsaken RAID 5, but it pretty much confirmed what my
experience had been: when I considered the life cycle of the
installation, the time lost waiting for file transfers, and so on with
RAID 5, it was foolish for me to recommend RAID 5 to anyone.

It's not that RAID 5 doesn't work... it does. It's not that it is prone
to failure... it's not (though this article suggests that the more
drives you have in a RAID 5 array, the more likely you are to suffer a
catastrophic loss while rebuilding the array). It's just that I am
inclined to use cheaper hard drives and cheaper controllers, and at
some point I need that extra margin of safety. On top of that, it
seems to me that RAID 10 smokes RAID 5 on every performance
characteristic my clients are likely to exercise (and yes, that
includes databases). RAID 5 primarily satisfies the need for maximum
storage for the least money, and that is rarely what I need in a
storage system for a server.
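For what it's worth, the usual rule-of-thumb write-penalty arithmetic
backs that up for small random writes. This is only a sketch of that
rule of thumb: the disk counts and per-disk IOPS below are made-up
numbers, and it ignores controller write caching and full-stripe
writes, which soften the penalty in practice.

```python
# Rough random-write throughput comparison, assuming every logical
# write is a small (sub-stripe) random write and each disk delivers
# `disk_iops` operations per second.

def raid10_write_iops(n_disks: int, disk_iops: int) -> float:
    # Each logical write costs 2 physical writes (one per mirror half).
    return n_disks * disk_iops / 2

def raid5_write_iops(n_disks: int, disk_iops: int) -> float:
    # Classic read-modify-write: read old data, read old parity,
    # write new data, write new parity = 4 physical I/Os per write.
    return n_disks * disk_iops / 4

# Hypothetical 6-disk array of 150-IOPS drives:
print(raid10_write_iops(6, 150))  # 450.0
print(raid5_write_iops(6, 150))   # 225.0
```

By this arithmetic a RAID 10 array sustains roughly twice the small
random writes of a RAID 5 array on the same spindles, which is exactly
the workload a busy database throws at it.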



More information about the Linux-PowerEdge mailing list