Buffer overflow creating 92 drive RAID60 array

Lloyd Brown lloyd_brown at byu.edu
Wed Mar 7 12:14:00 CST 2012

This document might be a good place to start if you're trying to
optimize performance, etc.  It's basically the explanation of the setup
of the NSS storage appliance that Dell sells, which is really just RHEL,
XFS, and NFS.


Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University

On 03/07/2012 11:04 AM, John Lloyd wrote:
>> Hello,
>> I'm trying to create a 92 drive RAID60 array spanning 4 MD1220 shelves
>> on a R710 running Debian 6 (Squeeze).
> Remarkable.
>> Unfortunately, I get buffer
>> overflow in omconfig when I try to do this.
> Somehow this is not surprising.
> Based on my understanding, a RAID-60 array would mean two independent RAID-6 arrays with blocks interleaved (RAID-0).  Each RAID-6 array would be 46 disks, 2 of which are parity.  The 44 data disks would be striped in parallel, with parity calculated over a 128 kB chunk on each disk.
> The unit of a full stripe would be 128 kB multiplied by 44 = 5632 kB, about 5.5 megabytes of data.  This would not be so much of a problem, but the 44 disks would be.  Each write (such as an inode update) would require 44 physical reads (assuming parity matches) and 46 physical writes.  Caching might help a bit, but basically you would be waiting for 90 physical IOs per logical update.
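The stripe and IO arithmetic above can be sketched quickly (the numbers are the hypothetical geometry from this thread: 92 drives split into two 46-disk RAID-6 spans, 128 kB chunk per disk):

```python
# Geometry assumed from the thread, not from any real controller query.
CHUNK_KIB = 128
SPANS = 2
DISKS_PER_SPAN = 46
PARITY_PER_SPAN = 2

data_disks = DISKS_PER_SPAN - PARITY_PER_SPAN      # 44 data disks per span
full_stripe_kib = CHUNK_KIB * data_disks           # 5632 KiB, about 5.5 MiB

# Worst-case small update on one span: read every data disk to recompute
# parity, then write all data and parity disks back.
reads = data_disks                                 # 44 physical reads
writes = DISKS_PER_SPAN                            # 46 physical writes
print(full_stripe_kib, reads + writes)             # 5632 90
```

This is only the pessimistic reconstruct-write case John describes; a cache-friendly full-stripe write avoids the reads entirely.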
> In my own experience the largest useful RAID set size is 8 data disks plus parity.   If 8 disks are not big enough then stripe and/or concatenate them with LVM or RAID-0 in your RAID controller or whatever.
> Note also: your file system needs to be carefully selected and tuned.  For example, XFS supports very large volumes well but has to be tuned to match the underlying stripe structure to get the best metadata performance.
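As a rough illustration of matching XFS to the underlying stripe structure, something like the following could derive the mkfs.xfs stripe hints for one 46-disk RAID-6 span (su/sw are real mkfs.xfs -d options, but the chunk size, disk counts, and device name here are assumptions, not values from the poster's setup):

```python
# su = RAID chunk size per disk, sw = number of data-bearing disks.
chunk_kib = 128
data_disks = 46 - 2   # 46-disk RAID-6 span, 2 parity disks

# /dev/sdX is a placeholder device name.
cmd = f"mkfs.xfs -d su={chunk_kib}k,sw={data_disks} /dev/sdX"
print(cmd)  # mkfs.xfs -d su=128k,sw=44 /dev/sdX
```

With a smaller RAID set, as John recommends, sw would shrink accordingly (e.g. sw=8 for 8 data disks).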
> --John
> _______________________________________________
> Linux-PowerEdge mailing list
> Linux-PowerEdge at dell.com
> https://lists.us.dell.com/mailman/listinfo/linux-poweredge
