performance bottleneck in Linux MD RAID-1

Bond Masuda bond.masuda at jlbond.com
Wed Jul 14 22:32:57 CDT 2010


Hi Everyone,

I'm wondering if some of the gurus around here might be able to help me
out. We have a PE2970 with two PERC 6/E controllers; each PERC 6/E is
connected via a single SAS cable to an MD1000 with 15x 1TB Hitachi SATA
7.2K drives. Each MD1000 is set up in RAID-10 with 14 drives and 1 hot
spare. Within Linux, we mirror the two MD1000s with Linux MD RAID-1 as
/dev/md0. On top of /dev/md0 we have LVM2, and then XFS on the LV. The
reason for LVM2 is to take snapshots (we reserve about 10% of the space
in the VG for them).
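
For reference, the stack is assembled more or less like this. This is a
rough sketch, not our exact commands; the vdisk names /dev/sdb and
/dev/sdc and the VG/LV names are placeholders:

    # mirror the two PERC 6/E vdisks with Linux MD RAID-1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # LVM2 on top of the mirror, leaving ~10% of the VG free for snapshots
    pvcreate /dev/md0
    vgcreate vg_data /dev/md0
    lvcreate -l 90%VG -n lv_data vg_data
    # XFS on the LV
    mkfs.xfs /dev/vg_data/lv_data
    mount /dev/vg_data/lv_data /data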

We're seeing a performance bottleneck of about 200MBytes/sec for
sequential writes when testing with iozone. With 7 effective spindles
in each RAID-10 (roughly 50MBytes/sec per 7.2K drive), we were
expecting about 350MBytes/sec of sustained sequential writes.
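
The tests were along these lines (again a sketch; the file size, record
size, and mount point here are illustrative, not our exact parameters):

    # sequential write test with a file large enough to get past caching
    iozone -i 0 -s 64g -r 1m -f /data/iozone.tmp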

After trying several combinations, we found that if we remove the
Linux MD software RAID layer and put LVM2 directly on top of /dev/sdc
(the vdisk as presented by the PERC 6/E RAID-10), we get about
340MBytes/sec sequential writes. If we put XFS directly on top of
/dev/sdc1, we get about the same 340MBytes/sec. So we can reach our
anticipated ~350MB/s only when we don't use the MD RAID-1.
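
Concretely, the two no-MD variants looked roughly like this (device,
VG, and LV names assumed as above):

    # variant 1: LVM2 straight on the PERC vdisk, XFS on the LV
    pvcreate /dev/sdc
    vgcreate vg_test /dev/sdc
    lvcreate -l 90%VG -n lv_test vg_test
    mkfs.xfs /dev/vg_test/lv_test

    # variant 2: XFS straight on a partition of the vdisk
    mkfs.xfs /dev/sdc1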

Since the two MD1000s are connected to separate PERC 6/E controllers,
we didn't expect the MD RAID-1 to cause a >40% performance loss...
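
(For anyone trying to reproduce this: per-device throughput during the
runs can be watched with something like the sysstat iostat below, to
see whether both vdisks are being written in parallel at the same rate.)

    # 1-second samples of extended stats for the mirror and both vdisks
    iostat -x 1 md0 sdb sdc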

We even tried degrading the MD RAID-1 to see if writing to only one of
the mirrors would improve performance. It did NOT; still 200MB/s. It
almost seems like the Linux MD layer has a performance cap at around
200MB/s.
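
(The degrade test was roughly the following; the member device name is
assumed.)

    # fail and remove one half of the mirror, then rerun iozone on md0
    mdadm /dev/md0 --fail /dev/sdc
    mdadm /dev/md0 --remove /dev/sdc
    cat /proc/mdstat    # confirm md0 is running degraded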

Has anyone encountered this, and does anyone have suggestions for
removing this bottleneck? Any advice would be appreciated.

Thanks,
-Bond


