Buffer overflow creating 92 drive RAID60 array

John Lloyd jal at mdacorporation.com
Wed Mar 7 12:04:05 CST 2012


> Hello,
> 
> I'm trying to create a 92 drive RAID60 array spanning 4 MD1220 shelves
> on a R710 running Debian 6 (Squeeze).

Remarkable.

> Unfortunately, I get buffer
> overflow in omconfig when I try to do this.

Somehow this is not surprising.

Based on my understanding, a RAID-60 array here would mean two independent RAID-6 arrays with their blocks interleaved (RAID-0).  Each RAID-6 array would be 46 disks, 2 of which are parity.  The remaining 44 disks would be striped in parallel, with parity calculated over a 128 kB strip (stripe element) per disk.

A full stripe would be 128 kB multiplied by 44 data disks = about 5.5 MB of data.  The stripe size itself would not be so much of a problem, but the 44-disk width would be.  In the worst case, each small write (such as an inode update) would require 44 physical reads to recompute parity and 46 physical writes.  Caching might help a bit, but basically you would be waiting on 90+ physical I/Os per logical update.
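
Just to put rough numbers on it, here is a quick back-of-the-envelope sketch in Python.  The strip size and disk counts are the assumptions from above, and the read/write counts are the worst case where the controller reads the whole stripe to recompute parity and writes it all back:

# Rough RAID-60 geometry arithmetic for the layout described above.
# Assumptions: 92 drives split into two RAID-6 spans, 128 kB strip per disk.
STRIP_KB = 128                                     # per-disk strip size
DISKS_PER_SPAN = 46                                # 92 drives / 2 spans
PARITY_PER_SPAN = 2                                # RAID-6: two parity strips
DATA_PER_SPAN = DISKS_PER_SPAN - PARITY_PER_SPAN   # 44 data disks

full_stripe_kb = STRIP_KB * DATA_PER_SPAN          # 5632 kB, ~5.5 MB
# Worst case for a sub-stripe write: read every data strip to recompute
# parity, then rewrite the whole stripe including both parity strips.
worst_reads = DATA_PER_SPAN                        # 44
worst_writes = DISKS_PER_SPAN                      # 46

print(f"full stripe: {full_stripe_kb} kB (~{full_stripe_kb / 1024:.1f} MB)")
print(f"worst case per small write: {worst_reads} reads + "
      f"{worst_writes} writes = {worst_reads + worst_writes} physical I/Os")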

In my own experience the largest useful RAID set size is 8 data disks plus parity.  If 8 data disks are not big enough, build several such sets and stripe and/or concatenate them with LVM, or with RAID-0 in your RAID controller, or whatever.
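
For instance (purely as an illustration of the arithmetic, not a sizing recommendation), 92 drives carve up into nine 8+2 RAID-6 sets with two drives left over as hot spares:

# Hypothetical carve-up of the 92 drives into 8-data + 2-parity RAID-6 sets.
TOTAL_DRIVES = 92
DATA_DISKS = 8
PARITY_DISKS = 2
set_size = DATA_DISKS + PARITY_DISKS               # 10 drives per set

num_sets, spares = divmod(TOTAL_DRIVES, set_size)  # 9 sets, 2 spares
usable = num_sets * DATA_DISKS                     # 72 data disks

print(f"{num_sets} x ({DATA_DISKS}+{PARITY_DISKS}) RAID-6 sets, "
      f"{spares} hot spares, {usable} data disks "
      f"({usable / TOTAL_DRIVES:.0%} of raw capacity)")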

Note also: your file system needs to be carefully selected and tuned.  For example, XFS supports very large volumes well but has to be tuned to match the underlying stripe structure to get the best metadata performance.
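
As a minimal sketch of that tuning (assuming the hypothetical 8-data-disk sets above with a 128 kB strip per disk; substitute whatever geometry your controller actually reports), the stripe alignment mkfs.xfs wants via its -d su=...,sw=... options can be derived like this:

# Derive mkfs.xfs stripe-alignment values from an assumed RAID geometry.
STRIP_KB = 128        # per-disk strip size  -> mkfs.xfs "su"
DATA_DISKS = 8        # data disks per set   -> mkfs.xfs "sw"

su = f"{STRIP_KB}k"
sw = DATA_DISKS
full_stripe_kb = STRIP_KB * DATA_DISKS             # 1024 kB per full stripe

print(f"mkfs.xfs -d su={su},sw={sw} /dev/<your-volume>")
print(f"(one full stripe = {full_stripe_kb} kB)")

If you then stripe several such sets together with LVM or RAID-0, the effective stripe width the file system should be told about grows accordingly.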

--John


