Optimizing disk i/o performance

John LLOYD jal at mdacorporation.com
Tue Jul 13 10:31:52 CDT 2010

> Date: Tue, 13 Jul 2010 10:40:06 -0400
> From: Richard Andrews <randrews at pelmorex.com>
> Subject: Optimizing disk i/o performance
> To: "'linux-poweredge at dell.com'" <linux-poweredge at dell.com>
> Message-ID:
> 	<496A9F867C284A47A25BF0FB1A1EF81930DC7D011B at exchange-mb-
> 01.office.pelmorex.com>
> Content-Type: text/plain; charset="us-ascii"
> Hello,
> I'm investigating the most ideal settings to improve the performance of
> a glusterfs implementation.  Are there optimal settings for the
> following parameters with respect to the Perc 6/I rev.122 controllers?
> /sys/block/sda/queue/nr_requests
> Blockdev Readahead value (blockdev -getra /dev/sda)
> Currently the default values for CentOS are 128 and 256 respectively.
> I understand that increasing nr_requests will have a memory usage
> impact and changing the readahead parameters may have a memory usage
> impact.
> Regards,
> Richard Andrews
> Pelmorex Media Inc.

We typically set up a blockdev --setra of about 4k or 8k (the value is in 512-byte sectors) per physical device; memory is cheap.  It usually gives a better-than-50% improvement in I/O on large files.

Here is an example.  sdb is a 6-way RAID set, so it gets 1k per disk (6144 total).  This is a SLES10 system; RH/CentOS may vary in where or how you do this.

# cat /etc/init.d/boot.local
#! /bin/sh

blockdev --setra 4096 /dev/sdc
blockdev --setra 4096 /dev/sde
blockdev --setra 4096 /dev/sdf
blockdev --setra 4096 /dev/sdg
blockdev --setra 4096 /dev/sdh
blockdev --setra 4096 /dev/sdi

blockdev --setra 6144 /dev/sdb

echo noop > /sys/block/sdc/queue/scheduler
echo noop > /sys/block/sde/queue/scheduler
echo noop > /sys/block/sdf/queue/scheduler
echo noop > /sys/block/sdg/queue/scheduler
echo noop > /sys/block/sdh/queue/scheduler
echo noop > /sys/block/sdi/queue/scheduler

Setting the scheduler to noop means that Linux and the RAID controller are not competing over who has the better I/O scheduler.  Since you cannot turn off the RAID controller's scheduling (except by avoiding RAID entirely), we keep Linux out of the picture.
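On RH/CentOS the same commands would typically go in /etc/rc.local instead of boot.local.  Here is a minimal sketch of that approach — the device names and the loop are illustrative, not part of the original script, so adjust them to your layout:

```shell
#!/bin/sh
# Sketch of an /etc/rc.local equivalent for RH/CentOS (device names are
# examples -- change sdb/sdc to match your system).

# blockdev --setra takes 512-byte sectors, so 4096 sectors = 2 MiB readahead.
RA_SECTORS=4096
echo "readahead: ${RA_SECTORS} sectors = $((RA_SECTORS * 512 / 1024 / 1024)) MiB"

for dev in sdb sdc; do
    # Skip devices that do not exist on this machine.
    [ -b "/dev/$dev" ] || continue
    blockdev --setra "$RA_SECTORS" "/dev/$dev"
    echo noop > "/sys/block/$dev/queue/scheduler"
done
```

You can confirm the settings took effect with `blockdev --getra /dev/sdb` and `cat /sys/block/sdb/queue/scheduler` (the active scheduler is shown in brackets).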
