[Linux-PowerEdge] Extremely poor performance with LVM vs. RAW disc

L. A. Walsh, Tlinx Solutions dell at tlinx.org
Mon Nov 5 07:39:12 CST 2012



Gregory Gulik wrote:
> I am working on setting up a large RAID setup for a variety of 
> functions that aren't very performance critical. I'm trying to use LVM 
> on the filesystem to allow me to carve up the space for different 
> functions with the flexibility to adjust things later if necessary.
>
> However I'm finding the performance of the LVM based filesystem is 
> many times slower than the raw partition. I guess I expect some 
> overhead but not 10x.
>
> Specifics of my test configuration:
> Dell PowerEdge 1950 with 12G RAM
> Dell PERC 6/E RAID controller
> Dell MD1000 Controller
> 8 x Seagate 2TB SATA drives in RAID-6 config (otherwise using default 
> settings in OpenManage)
> Operating System is CentOS 6.3
>
> Total space available is 12TB and partitioned using parted as follows:
> Disk /dev/sdb: 12.0TB
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
>
> Number  Start   End     Size     File system  Name   Flags
>  1      17.4kB  10.0TB  10000GB               data1  lvm
>  2      10.0TB  12.0TB  1999GB   ext4         data3
>   
>
>
> I then created a 2TB ext4 filesystem mounted on /data2 using /dev/sdb2 
> physical partition and created a 2TB logical volume using the LVM 
> partition and also formatted it with ext4 mounted on /data2
>
> Then when I compare basic read/write performance the difference is 
> shocking. Using:
> time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
> and 
> time dd if=ddfile of=/dev/null bs=8k
> Below are the average of several attempts
>
> Raw partition:
> Read: 48.3 secs
> Write: 70.2 secs
>
> LVM partition:
> Read: 288.3 secs
> Write: 701.6 secs
----
A couple of things are in play here.

First, you need to start your PVs on a strip (stripe-unit) boundary for large-I/O 
performance; for optimal smaller I/O, you want to start them on a full stripe-width 
boundary. This is a parameter you give to pvcreate (the data offset). For example, 
I have 12 data disks x 64k strips = 768k, so I start my first PV at 768k.
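
For your array that would look something like this (just a sketch -- I'm assuming 
a 64k strip and 6 data disks in your 8-drive RAID-6, i.e. a 384k stripe width; 
check what the PERC is actually using):

    # align the start of the PV's data area to a full stripe (6 x 64k = 384k)
    # (the partition the PV lives on should itself start on a stripe boundary too)
    pvcreate --dataalignment 384k /dev/sdb1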

Then you layer a VG on top of that. It has a physical-extent size, and you want 
that to be a multiple of your stripe width for optimal I/O. (I messed that up -- 
mine is a strip multiple, but not a stripe-width multiple, so I get lower 
performance on some small writes than I otherwise might. Too much of a pain to 
reformat.)
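
Something like this (again just a sketch, with a made-up VG name; some older LVM 
versions only accept power-of-2 extent sizes, in which case you're stuck with a 
strip multiple like I am -- the default 4M is a multiple of the 64k strip but not 
of a 384k stripe width):

    # make the physical-extent size a multiple of the 384k stripe width
    vgcreate -s 3M vg_data /dev/sdb1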

Then your LVs have a start position and a chunk size -- you want those to come 
out as stripe-width multiples as well.
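
With LVM2, LVs are allocated in whole extents, so they start on an extent 
boundary -- if the extent size above is a stripe-width multiple, a plain lvcreate 
stays aligned (made-up names and size again):

    # lvcreate rounds the size up to whole extents, so the LV's start and end
    # land on extent (and therefore stripe) boundaries
    lvcreate -L 2T -n lv_data2 vg_data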

Then you specify your stripe width and strip size as filesystem parameters as 
well (or at least you do on filesystems that understand RAID, like XFS). I 
assume, since you are using ext4, that it has such parameters too.
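
It does -- mke2fs takes them as extended options, in filesystem blocks. A sketch, 
assuming 4k blocks, a 64k strip and 6 data disks (so stride = 64k/4k = 16 and 
stripe-width = 16 x 6 = 96; the device path is just illustrative):

    # stride = strip size in fs blocks, stripe-width = stride x data disks
    mkfs.ext4 -E stride=16,stripe-width=96 /dev/vg_data/lv_data2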

While the OS knows the layout of the disks under the PV, it doesn't know about 
the LVM layers on top of that... so the filesystems generally don't even try to 
guess at proper alignment -- that's all manual, AFAIK (at least on Linux).

If you want to test disk speed, you should preallocate the file as contiguous 
(you can do this on XFS; I don't know about ext4). Then, when you 'dd', use 
direct I/O and the nocreat,notrunc options to 'dd', so you are reading and 
writing the same area and aren't exercising the filesystem's ability to allocate 
space, nor the OS's buffer cache.
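
Something along these lines (a sketch using your test sizes; fallocate 
preallocates the space up front, though it doesn't strictly guarantee the 
extents are contiguous):

    # preallocate 8k x 2,000,000 blocks = 16,384,000,000 bytes, same as the dd test
    fallocate -l 16384000000 ddfile
    # rewrite and reread the existing file with direct I/O -- no allocation,
    # no page cache
    time dd if=/dev/zero of=ddfile bs=8k count=2000000 oflag=direct conv=nocreat,notrunc
    time dd if=ddfile of=/dev/null bs=8k iflag=direct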

Doing those things should get you a lot closer to your theoretical numbers...

Hopefully I explained enough... if not, feel free to ask more...

Linda
