Buffer overflow creating 92 drive RAID60 array

Steff steff at steff.name
Wed Mar 7 12:32:42 CST 2012


On 7 March 2012 18:04, John Lloyd <jal at mdacorporation.com> wrote:
>
> Based on my understanding, a RAID-60 array would mean two independent RAID-6 arrays with blocks interleaved (RAID-0).  Each RAID-6 array would be 46 disks, 2 of which are parity.  44 disks would be striped in parallel, with parity calculated on a 128kbyte stripe size.
>
> The unit of a full stripe would be 128 kB multiplied by 44, i.e. about 5.5 megabytes of data.  That by itself would not be so much of a problem, but the 44 disks would be.  Each write (such as an inode update) would require 44 physical reads (assuming parity matches) and 46 physical writes.  Caching might help a bit, but basically you would be waiting for 90+ physical IOs per logical update.
>
> In my own experience the largest useful RAID set size is 8 data disks plus parity.   If 8 disks are not big enough then stripe and/or concatenate them with LVM or RAID-0 in your RAID controller or whatever.

I'm not sure that's wholly true. RAID n+0 means striping over some
number of RAID n sets, not necessarily just two. One of the things I'd
hope for from a commercial RAID controller is that it would keep each
pool below a sensible maximum size (based on intimate knowledge of
what the controller can handle) and increase the number of pools to
match. That said, as I mentioned to the original poster, I'd be
tempted just to create the RAID6s individually and use md to stripe
over them. I think LVM2 will do striping too.
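
Untested and purely illustrative (the device names are placeholders
for however the 92 drives actually enumerate), but the md route could
look something like four 23-disk RAID6s with a RAID0 stripe over the
top:

  # Four RAID-6 legs of 23 disks each (21 data + 2 parity)
  mdadm --create /dev/md0 --level=6 --raid-devices=23 --chunk=128 /dev/sd[b-x]
  mdadm --create /dev/md1 --level=6 --raid-devices=23 --chunk=128 /dev/sdy /dev/sdz /dev/sda[a-u]
  mdadm --create /dev/md2 --level=6 --raid-devices=23 --chunk=128 /dev/sda[v-z] /dev/sdb[a-r]
  mdadm --create /dev/md3 --level=6 --raid-devices=23 --chunk=128 /dev/sdb[s-z] /dev/sdc[a-o]

  # RAID-0 across the four RAID-6 legs
  mdadm --create /dev/md10 --level=0 --raid-devices=4 --chunk=128 \
      /dev/md0 /dev/md1 /dev/md2 /dev/md3

Or, if you'd rather have LVM2 do the striping, make the RAID6s into
physical volumes and create a striped logical volume across them:

  pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3
  vgcreate bigvg /dev/md0 /dev/md1 /dev/md2 /dev/md3
  lvcreate -i 4 -I 128 -l 100%FREE -n biglv bigvg

Either way each RAID6 rebuild domain is much smaller than a single
46-disk set, and you can pick the number of legs to taste.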

S


