[Linux-PowerEdge] Linux-PowerEdge Digest, Vol 102, Issue 9

John Lloyd jal at mdacorporation.com
Mon Nov 5 19:12:33 CST 2012


Re this:

>You use sunit+swidth or su+sw to specify the raid layout that you chose
>when making the raid (its stripe size and the width being the number of data disks).
>
>A good stripe width (su * #data disks) is one that will easily fit in the 
>controller's cache.
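For reference, the two spellings in that quote are equivalent; here is a minimal sketch, assuming a hypothetical array with a 64 KiB chunk and 4 data disks (su/sw take a byte size and a disk count, sunit/swidth take 512-byte sectors; /dev/sdX is a placeholder):

# mkfs.xfs -d su=64k,sw=4 /dev/sdX
# mkfs.xfs -d sunit=128,swidth=512 /dev/sdX    # same geometry: 64 KiB = 128 sectors, 128*4 = 512

Substitute your own device and the geometry you actually used when building the array.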


My experience with XFS (several months of first-hand tuning for a wildly varying set of applications) is that tuning sunit+swidth greatly improves its metadata processing but does nothing for file throughput.  Specifically, this kind of thing:

# mkfs ...  -d su=$((4*512))k,sw=4 -l version=2,su=$((4*512))k,lazy-count=1 ....

When tried on a fast flash disk, file create/delete rates went up by a lot, from a few hundred per second to thousands (measured with bonnie++).
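For anyone wanting to reproduce that kind of metadata number, a sketch of a bonnie++ run that exercises only the file create/stat/delete phase (the mount point and file count here are placeholders I picked, not the exact run above):

# bonnie++ -d /mnt/xfs -s 0 -n 64 -u root

-s 0 skips the sequential IO tests and -n 64 creates 64*1024 small files, so the result isolates metadata rates rather than throughput.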

On the other hand, getting your stripe alignments just so does help a lot for throughput, as we've been discussing.
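If you want to confirm what alignment an existing filesystem was built with, xfs_info reports it (values are in filesystem blocks, typically 4 KiB; the mount point is just an example):

# xfs_info /mnt/xfs | grep -E 'sunit|swidth'

For a 64k stripe unit across 4 data disks you would expect to see sunit=16 blks, swidth=64 blks in the data section.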

But the notion that stripe width should be defined by the characteristics of your controller is not correct in my view.  As a rule of thumb, stripe size should differ from the average IO size (observe it with iostat).  If IO size is 500 kB, a stripe size of 64k with 4 or 5 disks works well: each IO then spans every data disk, so they all work in parallel.  If IO size is 8 kB, a stripe size of 64 to 128k is better, since separate IOs are then more likely to land on different disks and run concurrently, assuming the IO rate is high enough to keep many disks busy.
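To see what your average IO size actually is, iostat's extended stats are the quickest check; a sketch (the device name is a placeholder, and depending on your sysstat version the column is avgrq-sz in 512-byte sectors or areq-sz in kB):

# iostat -x 5 /dev/sdb

Working the 500 kB case through: with su=64k and sw=4 the full stripe is 256k, so one 500 kB request covers roughly two full stripes and each data disk handles about 128k of it, which is why all the disks spin up in parallel.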

The archives of this list have other comments for 6E controllers; worth looking at.

--John


