Optimizing disk i/o performance

Richard Andrews randrews at pelmorex.com
Tue Jul 13 14:59:40 CDT 2010


Thanks John,

I'll give it a try and see what comes of it.

Richard Andrews
Systems Administrator - IT Operations
Pelmorex Media Inc.

> -----Original Message-----
> From: linux-poweredge-bounces at dell.com [mailto:linux-poweredge-
> bounces at dell.com] On Behalf Of linux-poweredge-request at dell.com
> Sent: Tuesday, July 13, 2010 1:00 PM
> To: linux-poweredge at dell.com
> Subject: Linux-PowerEdge Digest, Vol 73, Issue 21
> 
> Send Linux-PowerEdge mailing list submissions to
> 	linux-poweredge at dell.com
> 
> To subscribe or unsubscribe via the World Wide Web, visit
> 	https://lists.us.dell.com/mailman/listinfo/linux-poweredge
> or, via email, send a message with subject or body 'help' to
> 	linux-poweredge-request at dell.com
> 
> You can reach the person managing the list at
> 	linux-poweredge-owner at dell.com
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Linux-PowerEdge digest..."
> 
> 
> Today's Topics:
> 
>    1. amazon ec2 hosting? (Doug Simmons)
>    2. RE: Optimizing disk i/o performance (John LLOYD)
>    3. Re: amazon ec2 hosting? (Simon Waters)
>    4. OMSA & Suse Enterprise 11 (Ryan Kish)
>    5. ssh nologin DRAC 4/P (Benito Lopera)
> 
> 
> ----------------------------------------------------------------------
> 
> Message: 1
> Date: Tue, 13 Jul 2010 10:28:50 -0500
> From: "Doug Simmons" <dsimmons at lib.siu.edu>
> Subject: amazon ec2 hosting?
> To: <linux-poweredge at dell.com>
> Message-ID:
> 	<797303B748C24D40ABDCCC6416E356A403094879 at libsrvxch2.cd.ds.siu.edu>
> Content-Type: text/plain; charset="us-ascii"
> 
> We're basically a PowerEdge shop, or are becoming one as the Sun
> equipment ages. But it has been proposed that we consider cloud options
> going forward. After an extensive research session spanning all of
> fifteen or twenty minutes, I'm not sure we'd save much by going to
> hosted virtual instances and storage such as Amazon EC2.
> 
> 
> 
> Do any of the list members have any experience with this service?
> 
> 
> 
> Thanks,
> 
> 
> 
> Doug Simmons
> 
> Procedures and Systems Analyst II
> 
> Morris Library Systems
> 
> SIUC
> 
> 
> 
> 
> ------------------------------
> 
> Message: 2
> Date: Tue, 13 Jul 2010 08:31:52 -0700
> From: John LLOYD <jal at mdacorporation.com>
> Subject: RE: Optimizing disk i/o performance
> To: "linux-poweredge at dell.com" <linux-poweredge at dell.com>
> Message-ID:
> 	<D4ADCF07769283468C6E62E379B586571036F08D65 at EVSYVR1.ds.mda.ca>
> Content-Type: text/plain; charset="us-ascii"
> 
> > Date: Tue, 13 Jul 2010 10:40:06 -0400
> > From: Richard Andrews <randrews at pelmorex.com>
> > Subject: Optimizing disk i/o performance
> > To: "'linux-poweredge at dell.com'" <linux-poweredge at dell.com>
> > Message-ID:
> > 	<496A9F867C284A47A25BF0FB1A1EF81930DC7D011B at exchange-mb-
> > 01.office.pelmorex.com>
> >
> > Content-Type: text/plain; charset="us-ascii"
> >
> > Hello,
> >
> > I'm investigating the most ideal settings to improve the performance of
> > a glusterfs implementation.  Are there optimal settings for the
> > following parameters with respect to the Perc 6/I rev.122 controllers?
> >
> > /sys/block/sda/queue/nr_requests
> > Blockdev Readahead value (blockdev -getra /dev/sda)
> >
> > Currently the default values for CentOS are 128 and 256 respectively.
> > I understand that increasing nr_requests will have a memory usage
> > impact and changing the readahead parameters may have a memory usage
> > impact.
> >
> > Regards,
> >
> > Richard Andrews
> > Pelmorex Media Inc.
> 
> 
> 
> We typically set up a blockdev --setra of about 4k or 8k blocks per
> physical device; memory is cheap. It usually makes a better-than-50%
> improvement in the I/O of large files.
> 
> Here is an example. sdb is a 6-way RAID set, so it gets 1k (1024
> sectors) per member disk, 6144 in total. This is a SLES10 system;
> RH/CentOS may vary in where or how you do this.
> 
> # cat /etc/init.d/boot.local
> #! /bin/sh
> 
> blockdev --setra 4096 /dev/sdc
> blockdev --setra 4096 /dev/sde
> blockdev --setra 4096 /dev/sdf
> blockdev --setra 4096 /dev/sdg
> blockdev --setra 4096 /dev/sdh
> blockdev --setra 4096 /dev/sdi
> 
> blockdev --setra 6144 /dev/sdb
> 
> echo noop > /sys/block/sdc/queue/scheduler
> echo noop > /sys/block/sde/queue/scheduler
> echo noop > /sys/block/sdf/queue/scheduler
> echo noop > /sys/block/sdg/queue/scheduler
> echo noop > /sys/block/sdh/queue/scheduler
> echo noop > /sys/block/sdi/queue/scheduler
> 
> 
> Resetting the scheduler means that Linux and the RAID controller are
> not competing over who has the best I/O scheduler. Since you cannot
> turn off RAID controller scheduling (except by avoiding RAID), we keep
> Linux out of the picture.
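
Since the original question was about CentOS, a minimal sketch of the
same tuning applied there (e.g. from /etc/rc.d/rc.local) might look like
the following. The device names follow John's example and the DRYRUN
helper is my own addition; both are assumptions to adapt to your layout.

```shell
#!/bin/sh
# Sketch: apply the readahead/scheduler tuning above at boot on
# RHEL/CentOS. With DRYRUN=1 (the default here) the commands are only
# printed, so you can review them before running as root with DRYRUN=0.
run() { if [ "${DRYRUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

for dev in sdc sde sdf sdg sdh sdi; do
    # 4096 sectors of 512 bytes = 2 MiB of readahead per member disk
    run blockdev --setra 4096 "/dev/$dev"
    # leave I/O scheduling to the RAID controller
    run sh -c "echo noop > /sys/block/$dev/queue/scheduler"
done

# the 6-way RAID device itself gets 6 x 1024 sectors
run blockdev --setra 6144 /dev/sdb
```

Note that --setra counts 512-byte sectors, so 4096 here means 2 MiB of
readahead, not 4096 filesystem blocks.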
> 
> 
> --John
> 
> 
> 
> 
> 
> ------------------------------
> 
> Message: 3
> Date: Tue, 13 Jul 2010 17:29:15 +0100
> From: Simon Waters <simonw at zynet.net>
> Subject: Re: amazon ec2 hosting?
> To: linux-poweredge at dell.com
> Message-ID: <201007131729.15598.simonw at zynet.net>
> Content-Type: text/plain;  charset="utf-8"
> 
> On Tuesday 13 July 2010 16:28:50 Doug Simmons wrote:
> >
> > Do any of the list members have any experience with this service?
> 
> We are pondering a different cloud system based on XEN, offering
> something similar to Amazon on a smaller scale. Pricing is slightly
> cheaper, but there is not much in it. They charge for disk I/O
> bandwidth, which required some quick checks on what we do; the perils
> of expensive external storage systems.
> 
> In our case we've done a little more than 20 minutes of research, but
> the conclusions aren't clear-cut.
> 
> I think the motivation for moving to a cloud system is probably not
> solely financial: one has to expect that the option to migrate
> instances will add robustness, that the option to add systems will add
> flexibility, and that things will scale; those are the factors one is
> looking for. These systems also have large, robust storage systems
> that are beyond the pockets of most businesses.
> 
> My main concern is still availability. I hear good things about Amazon
> EC2. A couple of other providers we looked at are not bad, but most
> don't have a track record you could point at and think "that is better
> than a decent hosting provider and dedicated hardware". We routinely
> get one-year-plus uptimes on Dell boxes that are a decade old, so when
> we see top cloud providers off the air for a couple of days at a time
> it doesn't convince me.
> 
> I've yet to use a virtualization product that didn't have
> virtualization "bugs", i.e. errors, downtime, or problems due to the
> virtualization process. The OpenVZ stuff we tried didn't memory-map
> files correctly. One provider using XEN migrated our instance to
> different hardware for maintenance, and when it woke up milliseconds
> later it was 1914 (Postfix said it wasn't doing anything until the
> date was plausible, which was probably wise, but did nothing for
> availability). What I've read of EC2 suggests it is rather
> idiosyncratic compared to more recent virtualization offerings
> elsewhere; on the other hand, they seem to have been free of major
> problems for a while.
> 
> The XEN provider we've looked at most closely seems promising; they
> appear to have resolved a lot of issues with their earlier system. But
> I have concerns about scalability, because they somewhat limit the RAM
> available to each instance, and when you have hundreds of gigabytes of
> data it would be nice to know you could scale RAM to something more
> substantial if needed; serious disk space is also expensive in these
> storage arrays. And they have zero track record on their new system,
> because it is new. On the other hand, I'm not THAT scared of
> virtualization; I just want to test it the whole way, which is
> time-consuming.
> 
> One provider looked solid, but the pricing was high, and the marketing
> emphasis was all on hosting enterprise servers rather than web servers
> (which is what we want).
> 
> Has anyone gone with a dual-provider strategy, creating instances at
> different cloud providers and failing across on long outages? That
> would address my key concerns about reliability, but it looks
> expensive and complex to implement.
> 
> If there were relatively cheap network storage systems around with
> suitable characteristics, I'd be tempted to build our own XEN hosting
> system to get the advantages of virtualisation without the pain of a
> third-party relationship. But I haven't read that bit of 'the Book of
> XEN' yet, and I suspect the answer is "no".
> 
> 
> 
> ------------------------------
> 
> Message: 4
> Date: Tue, 13 Jul 2010 10:47:26 -0600
> From: "Ryan Kish" <ryan.kish at readytalk.com>
> Subject: OMSA & Suse Enterprise 11
> To: <linux-poweredge at lists.us.dell.com>
> Message-ID:
> 	<C9CF37EDFB8B76489782E6F24A02CD68068A3384 at apollo.readytalk.com>
> Content-Type: text/plain; charset="us-ascii"
> 
> Hi,
> 
> I am trying to track down OMSA packages for SLES 11 (i386).
> Unfortunately, it appears the only packages available are x86_64. Does
> anyone know where I can obtain OMSA for SLES11 on i386?
> 
> Thanks!
> Ryan
> 
> ------------------------------
> 
> Message: 5
> Date: Tue, 13 Jul 2010 18:53:18 +0200
> From: Benito Lopera <glistadell at gmail.com>
> Subject: ssh nologin DRAC 4/P
> To: linux-poweredge at dell.com
> Message-ID:
> 	<AANLkTinfvWAOv7HJHyTGoYJjbE0-PLq_KnOuBaItOnVW at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Hi, I have a problem with SSH access to the DRAC on my Dell R200
> servers. I have a script that performs a soft shutdown by running
> "stop system1" in the DRAC shell, but the problem is that sometimes
> the DRAC denies the SSH login. For example:
> 
> root at local# ssh -l root 192.168.0.23
> root at 192.168.0.23's password:
> 
> Dell Remote Access Controller 4/P (DRAC 4/P)
> Firmware Version 1.71 (Build 02.19)
> [root]# exit
> 
> Connection to 192.168.0.23 closed.
> root at local#
> 
> Everything is OK, but when I try again two minutes later this is the
> result:
> 
> root at local# ssh -l root 192.168.0.23
> Received disconnect from 192.168.0.23: 11: Logged out.
> root at local#
> 
> The password is the same in both cases. I read in the manual that only
> one SSH session is supported at any given time, but I know that nobody
> is logged in. Does anybody know the cause of this problem?
> 
> Thanks.
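
One workaround, assuming the refusal is the transient single-session
limit rather than an authentication failure, would be to retry the
shutdown command with a pause between attempts. This is an untested
sketch; the DRAC address and the attempt/pause values are placeholders.

```shell
#!/bin/sh
# Sketch: retry a flaky DRAC SSH command. The DRAC 4/P allows only one
# SSH session at a time and may refuse logins briefly after a previous
# session closes, so a pause-and-retry loop can ride that out.
retry() {   # retry <attempts> <pause-seconds> <command...>
    n=$1; pause=$2; shift 2
    i=1
    while [ "$i" -le "$n" ]; do
        "$@" && return 0      # success: stop retrying
        sleep "$pause"        # let the DRAC reap the stale session
        i=$((i + 1))
    done
    return 1                  # all attempts failed
}

# usage against the DRAC (uncomment to run):
# retry 5 30 ssh -l root 192.168.0.23 "stop system1"
```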
> 
> ------------------------------
> 
> _______________________________________________
> Linux-PowerEdge mailing list
> Linux-PowerEdge at dell.com
> https://lists.us.dell.com/mailman/listinfo/linux-poweredge
> Please read the FAQ at http://lists.us.dell.com/faq
> 
> End of Linux-PowerEdge Digest, Vol 73, Issue 21
> ***********************************************


