[Linux-PowerEdge] Linux-PowerEdge Digest, Vol 102, Issue 6

John Lloyd jal at mdacorporation.com
Mon Nov 5 10:57:11 CST 2012

Gregory Gulik wrote:
> I am working on setting up a large RAID setup for a variety of 
> functions that aren't very performance critical. I'm trying to use LVM 
> on the filesystem to allow me to carve up the space for different 
> functions with the flexibility to adjust things later if necessary.
> However I'm finding the performance of the LVM based filesystem is 
> many times slower than the raw partition. I guess I expect some 
> overhead but not 10x.
> Specifics of my test configuration:
> Dell PowerEdge 1950 with 12G RAM
> Dell PERC 6/E RAID controller
> Dell MD1000 Controller
> 8 x Seagate 2TB SATA drives in RAID-6 config (otherwise using default 
> settings in OpenManage)
> Operating System is CentOS 6.3
> Total space available is 12TB and partitioned using parted as follows:
> Disk /dev/sdb: 12.0TB
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt

You have a couple of trade-offs here.  There has already been a lot of (very good) advice in this thread about aligning stripe units and stripe widths.
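As a sketch of what that alignment advice amounts to: with your 8-drive RAID-6 there are 6 data disks per stripe, so the filesystem's stride and stripe-width follow from the controller's stripe element size. The 64 KiB element and 4 KiB block size below are assumptions (check the actual element size in OpenManage), and the device/VG names in the comments are placeholders.

```shell
# Hypothetical geometry: 64 KiB PERC stripe element, 4 KiB ext4 blocks.
CHUNK_KB=64          # RAID stripe element size per disk (check OpenManage)
BLOCK_KB=4           # filesystem block size
DATA_DISKS=6         # 8 drives in RAID-6 = 6 data disks

STRIDE=$((CHUNK_KB / BLOCK_KB))         # blocks per stripe element
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))   # blocks per full stripe
echo "stride=$STRIDE stripe_width=$STRIPE_WIDTH"

# Then align the PV to a full stripe and pass the geometry to mkfs, e.g.:
#   pvcreate --dataalignment $((CHUNK_KB * DATA_DISKS))k /dev/sdb1
#   mkfs.ext4 -E stride=$STRIDE,stripe-width=$STRIPE_WIDTH /dev/vg0/lv0
```

With these numbers that comes out to stride=16 and stripe-width=96; if your stripe element differs, the same arithmetic applies.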

Or you could buy 4 more 2TB SATA disks, run RAID-10, keep your 12TB total, and skip the whole stripe-alignment analysis and its performance pitfalls.  RAID-10 is faster than RAID-5 or RAID-6 for more kinds of IO patterns.  And when a drive fails, the volume rebuild takes a couple of hours rather than a couple of days.
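The capacity claim above is easy to verify: RAID-6 loses two drives' worth to parity, RAID-10 loses half to mirroring, and both layouts land on the same 12TB usable here.

```shell
# Usable capacity of the two layouts, in TB (raw marketing TB, not TiB):
RAID6_USABLE=$(( (8 - 2) * 2 ))    # 8 drives, 2 parity drives' worth lost
RAID10_USABLE=$(( 12 * 2 / 2 ))    # 12 drives, half lost to mirroring
echo "RAID-6: ${RAID6_USABLE}TB  RAID-10: ${RAID10_USABLE}TB"
```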

The trade-offs, of course, are money vs. performance vs. the effort to get the system running.

One other hint: set the IO readahead size.  The blockdev --setra command is another good thing to run to help read IO.
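For example, something along these lines, with the caveat that readahead set on the raw device does not propagate to device-mapper nodes, so the LV needs its own setting. The 4 MiB value and the device names are assumptions, a starting point to tune rather than a recommendation:

```shell
# blockdev --setra takes the readahead in 512-byte sectors;
# 4 MiB is a plausible starting point for large sequential RAID reads.
RA_SECTORS=$((4 * 1024 * 1024 / 512))
echo "$RA_SECTORS"

# Commands to run as root (device names are examples):
#   blockdev --getra /dev/sdb                           # show current value
#   blockdev --setra "$RA_SECTORS" /dev/sdb             # raw device
#   blockdev --setra "$RA_SECTORS" /dev/mapper/vg0-lv0  # the LV itself
```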

