Pathetic RAID 5 NFS performance on Dell PE 2850 with PERC 4e/di

Irwan Hadi ihblist at gmail.com
Mon Jul 18 20:00:01 CDT 2005


On 7/18/05, Anup Gangwar <agangwar at calypto.com> wrote:
> Hi Irwan,
> 
>         Thanks for the response. We have the Seagate drives. Here is the
> model number:
> 
>  SEAGATE   Model: ST3146807LC

It looks like you have the Seagate 10K.6 drives, which is the older
generation.

> 
> I have taken this directly from the megaraid driver info. Are the Maxtor
> drives 10K RPM or 15K RPM?

Our Maxtor drives are 10K RPM. For comparison, our Seagate drives are
model ST3146707LC (Seagate 10K.7).

I'm starting to wonder if there is some incompatibility between these
Seagate drives and the latest LSI controllers (with their latest drivers
and firmware). I have tried both the PERC 4e/Di and the LSI Logic
320-4X, each with the latest firmware and driver, and still got very
poor performance.
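
If you want to rule NFS out entirely, a quick sanity check is a big
sequential read straight off the block device (the device name below is
just an example, and with 1 GB of RAM a multi-GB read keeps the page
cache from flattering the numbers):

    dd if=/dev/sda of=/dev/null bs=1M count=4096

If even that raw read is slow, the problem sits below NFS and below any
VM tuning.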

> 
> Thanks for all the help.
> 
> Regards,
> 
> Anup
> 
> 
> On Sun, 17 Jul 2005, Irwan Hadi wrote:
> 
> > Hi,
> >
> > You said that you have 5 x 146 GB hard drives. Can you tell me the
> > brand of these drives?
> > I'm having a horrible performance issue with my PE 2800, which uses a
> > PERC 4e/Di and 8 x 146 GB Seagate drives. I have tried changing the
> > controller to an LSI 320-4X, but no dice, the problem persists, while
> > my other server, a PE 2850 with a PERC 4e/Di and 4 x 146 GB Maxtor
> > drives, is working just fine.
> >
> > On 6/30/05, Anup Gangwar <agangwar at calypto.com> wrote:
> > > Hi Olle,
> > >
> > >         Thanks for the response. We already have the async option for all
> > > exported directories. As I said previously, this is not an NFS issue but
> > > a RAID issue; even a simple build over NFS shoots up the load on the
> > > server. Also, most of the time the process is in I/O wait state.
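> > >
> > > (For anyone following along: with the sysstat package installed,
> > > something like
> > >
> > >     iostat -x 5
> > >
> > > shows per-device utilization and average wait times, which makes it
> > > easy to confirm that the array itself, not the network, is the
> > > bottleneck.)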
> > >
> > > Regards,
> > >
> > > Anup
> > >
> > > On Thu, 30 Jun 2005, Olle Liljenzin wrote:
> > >
> > > > What kind of builds are you doing?
> > > >
> > > > I don't know if it is relevant in your case, but we have seen very poor
> > > > performance when linking big binaries over NFS on Linux. The reason
> > > > seems to be that newer Linux versions permit the clients to sync over
> > > > NFS, which I think is different from most other NFS implementations.
> > > > Adding 'async' in /etc/exports on the server restores the traditional
> > > > behaviour.
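> > > >
> > > > For example, a minimal exports entry with async (the path and the
> > > > client network are just placeholders) would be:
> > > >
> > > >     /home   192.168.1.0/24(rw,async)
> > > >
> > > > followed by 'exportfs -ra' on the server to re-export.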
> > > >
> > > > /Olle
> > > >
> > > >
> > > > > Hi All,
> > > > >
> > > > >     We have a PE 2850 being used as an NFS server for 10 desktops and
> > > > > four compute servers. User home directories are served from this machine.
> > > > > The typical workload is users doing large builds and file copies. The
> > > > > system configuration is as follows:
> > > > >
> > > > > 1. 3.0 GHz processor with HT
> > > > > 2. 1 GB RAM
> > > > > 3. 5 x 146 GB 10K RPM disks
> > > > > 4. Onboard PERC 4e/Di RAID (with the latest firmware)
> > > > > 5. RHEL 3 with kernel 2.4.21-32.0.1.ELsmp
> > > > > 6. Four disks configured as one logical drive; the fifth is a hot
> > > > >    spare
> > > > >
> > > > > MegaRaid2 config (from /proc/megaraid) reports
> > > > >
> > > > > -------------------------------------------------------------------------
> > > > > v2.10.10.1 (Release Date: Thu Jan 27 16:19:44 EDT 2005)
> > > > > PERC 4e/Di
> > > > > Controller Type: 438/466/467/471/493/518/520/531/532
> > > > > Controller Supports 40 Logical Drives
> > > > > Controller capable of 64-bit memory addressing
> > > > > Controller using 64-bit memory addressing
> > > > > Base = f8841000, Irq = 38, Initial Logical Drives = 1, Channels = 2
> > > > > Version =521S:H430, DRAM = 256Mb
> > > > > Controller Queue Depth = 254, Driver Queue Depth = 126
> > > > > support_ext_cdb    = 1
> > > > > support_random_del = 1
> > > > > boot_ldrv_enabled  = 1
> > > > > boot_ldrv          = 0
> > > > > boot_pdrv_enabled  = 0
> > > > > boot_pdrv_ch       = 0
> > > > > boot_pdrv_tgt      = 0
> > > > > quiescent          = 0
> > > > > has_cluster        = 0
> > > > >
> > > > > Module Parameters:
> > > > > max_cmd_per_lun    = 63
> > > > > max_sectors_per_io = 128
> > > > > -------------------------------------------------------------------------
> > > > >
> > > > > The problem is that we are seeing high I/O wait times with this setup,
> > > > > which leads to very poor NFS performance. Even under a mild NFS load the
> > > > > I/O wait time is as high as 80%. I have tried tuning the RHEL 3 VM using
> > > > > the values specified in:
> > > > >
> > > > >     http://lists.us.dell.com/pipermail/linux-poweredge/2005-June/021155.html
> > > > >
> > > > > to no avail; the performance is almost the same.
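> > > > >
> > > > > For reference, tuning of that sort goes through sysctl. A sketch,
> > > > > with an example read-ahead value rather than the exact numbers from
> > > > > the post above:
> > > > >
> > > > >     # inspect the current 2.4 VM settings
> > > > >     sysctl vm.bdflush vm.max-readahead vm.min-readahead
> > > > >     # example: raise read-ahead for large sequential reads
> > > > >     sysctl -w vm.max-readahead=256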
> > > > >
> > > > > Googling around suggests that the PERC 4e/Di is not a bad controller,
> > > > > so in principle it should be possible to get better performance out of
> > > > > this setup. Can someone suggest possible solutions? Would RHEL 4 give
> > > > > better performance? What about Fedora Core 4?
> > > > >
> > > > > Any help would be greatly appreciated.
> > > > >
> > > > > Thanks and Regards,
> > > > >
> > > > > Anup
> > > >
> > > >
> > >


