Load info

Rechenberg, Andrew ARechenberg at shermanfinancialgroup.com
Tue Oct 7 08:45:00 CDT 2003


I would look at disk I/O as Steve suggested.  We had some really bad
contention on our disk subsystem and that caused some REALLY high loads
on our system.  Once we upped the number of spindles on our disk
subsystem, our loads lowered significantly.
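A quick way to confirm that a high load is I/O-bound rather than CPU-bound is to count processes in uninterruptible sleep (state D), since Linux includes those in the load average even though they use no CPU. A minimal sketch (standard ps/awk, Linux semantics assumed):

```shell
# Processes in uninterruptible sleep (state D) are usually blocked on
# disk or NFS I/O; Linux counts them toward the load average.
ps -eo state= -o comm= | awk '$1 ~ /^D/ { n++; print } END { print n+0, "processes in D state" }'
```

If that count tracks your load average while the CPUs sit idle, the load is coming from the disk subsystem, not from computation.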

Here are our specs now:

- Dell PE6600
- 4 x 1.4GHz Xeon CPUs, HT enabled
- 8GB RAM (limited to 6GB due to a bug in the Linux VM and its
interaction with LVM)
- 56 18GB, 15K RPM SCSI disks in a Linux software RAID10 array across 4 PV220S
enclosures
- Red Hat 7.3 running a slightly customized 2.4.18-27bigmem kernel

Our load rarely goes over 8.00 now and is usually around 3.00-4.00.
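As a rough rule of thumb (Linux-specific): sustained load at or below the number of CPUs is comfortable, while load in the tens with mostly-idle CPUs means processes are queueing on I/O. The figures top reports come straight from /proc/loadavg, so you can check them in a script:

```shell
# The first three fields of /proc/loadavg are the 1-, 5- and 15-minute
# load averages: the average number of runnable plus uninterruptible
# (I/O-blocked) processes.
read load1 load5 load15 rest < /proc/loadavg
ncpu=$(grep -c '^processor' /proc/cpuinfo)
echo "load: $load1 (1m) $load5 (5m) $load15 (15m) across $ncpu CPUs"
```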

Hope this helps,
Andy.
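For reference on the iostat suggestion below: if sysstat is not installed, the raw counters iostat reads are exposed by the kernel directly (on the 2.4 kernels in this thread via /proc/partitions; on 2.6 and later via /proc/diskstats). A minimal sketch against the modern file, with field positions per the kernel's iostats documentation:

```shell
# /proc/diskstats: major, minor, device name, then 11+ counters.
# Field 4 is reads completed, field 8 writes completed, and field 13
# milliseconds spent doing I/O (a device near wall-clock saturation
# on field 13 is your bottleneck).
awk 'NF >= 14 { printf "%-10s reads=%s writes=%s io_ms=%s\n", $3, $4, $8, $13 }' /proc/diskstats
```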

On Tue, 2003-10-07 at 08:17, Steve Jump (SD) wrote:
> Hi Nitin,
>  
> Have you checked the I/O on your disk subsystem?  Your RAID could be
> rebuilding a drive, or handling some other error, and performance has
> dropped; or maybe there is simply too much I/O hitting your NFS server.
>  
> Your CPUs seem to be pretty idle, but that may just mean they are
> waiting to read from or write to disk. A high load average with low
> CPU usage is often a sign that the I/O subsystem cannot keep up with
> the demand.
>  
> If there is nothing wrong with the disks themselves, and if you haven't
> looked already, try iostat to see whether you are asking too much of
> your disks, or check your NFS server to see whether it has too many
> clients.  (iostat is in the sysstat package.)
>  
>  
> Steve Jump
>         -----Original Message-----
>         From: linux-poweredge-admin at dell.com
>         [mailto:linux-poweredge-admin at dell.com]On Behalf Of Nitin
>         Gizare
>         Sent: 07 October 2003 13:10
>         To: linux-poweredge at dell.com
>         Subject: Load info
>         
>         
>         Hi all
>          
>         I have a strange thing happening:
>          
>         4:27pm  up 114 days, 19:15,  1 user,  load average: 69.05, 69.15, 69.19
>         227 processes: 221 sleeping, 1 running, 5 zombie, 0 stopped
>         CPU0 states:  0.1% user,  5.2% system,  0.0% nice, 93.2% idle
>         CPU1 states:  0.1% user,  3.2% system,  0.0% nice, 95.2% idle
>         Mem:  2059472K av, 1989012K used,   70460K free,    784K shrd,   7716K buff
>         Swap: 2048248K av,  440484K used, 1607764K free               1411684K cached
>         
>           PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>         19848 cdsmgr    14   0  1280 1280   944 R     2.0  0.0   0:00 top
>         21084 root      14   0     0    0     0 SW    1.7  0.0 223:08 nfsd
>         21087 root      12   0     0    0     0 SW    1.0  0.0 220:34 nfsd
>         21081 root      11   0     0    0     0 SW    0.6  0.0 221:58 nfsd
>         21083 root      11   0     0    0     0 SW    0.6  0.0 220:53 nfsd
>         21086 root      11   0     0    0     0 SW    0.6  0.0 221:04 nfsd
>         21082 root      10   0     0    0     0 SW    0.3  0.0 221:07 nfsd
>         21085 root      10   0     0    0     0 SW    0.3  0.0 222:03 nfsd
>         18627 root       0 -20  1908 1852  1224 S <   0.3  0.0 100:38 lim
>             1 root       8   0   120   72    52 S     0.0  0.0   2:36 init
>             2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
>             3 root      19  19     0    0     0 SWN   0.0  0.0   0:03 ksoftirqd_CPU0
>             4 root      19  19     0    0     0 SWN   0.0  0.0   0:04 ksoftirqd_CPU1
>             5 root       9   0     0    0     0 SW    0.0  0.0  1014m kswapd
>             6 root       9   0     0    0     0 SW    0.0  0.0 353:06 kreclaimd
>             7 root       9   0     0    0     0 SW    0.0  0.0 107:14 bdflush
>             8 root       9   0     0    0     0 SW    0.0  0.0 152:17 kupdated
>             9 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 mdrecoveryd
>            80 root       9   0     0    0     0 SW    0.0  0.0   0:00 khubd
>           610 root       9   0   348  336   288 S     0.0  0.0   3:22 syslogd
>           615 root       9   0   864  164   124 S     0.0  0.0   0:10 klogd
>           633 rpc        9   0   280  260   188 S     0.0  0.0   1:58 portmap
>           776 root       9   0   388  308   232 S     0.0  0.0   0:00 ypbind
>           778 root       9   0   388  308   232 S     0.0  0.0   7:08 ypbind
>           779 root       9   0   388  308   232 S     0.0  0.0   0:00 ypbind
>           780 root       9   0   388  308   232 S     0.0  0.0   3:40 ypbind
>           864 root       9   0   404  364   312 S     0.0  0.0   0:26 automount
>           948 root       9   0   412  372   344 S     0.0  0.0   0:31 automount
>          1024 ident      9   0   172   28    16 S     0.0  0.0   0:00 identd
>          1027 ident      9   0   172   28    16 S     0.0  0.0   7:41 identd
>          1028 ident      9   0   172   28    16 S     0.0  0.0   0:00 identd
>          1029 ident      9   0   172   28    16 S     0.0  0.0   0:00 identd
>          1030 ident      9   0   172   28    16 S     0.0  0.0   0:00 identd
>          1071 root       9   0   192    4     0 S     0.0  0.0   0:00 sshd
>         
>         I am unable to figure out why the load is so high.
>         The OS is Red Hat 7.1.
>         Please help me figure out what is driving the machine's load so high.
>          
>         Rgds
>         Nitin
>          
-- 

Regards,
Andrew Rechenberg
Infrastructure Team, Sherman Financial Group
513.707.3809
