Load info

Keith Schincke kschin at rice.edu
Tue Oct 7 08:17:01 CDT 2003


For me this is usually the result of zombied processes waiting on an NFS 
share.  I have seen load averages from the low 20s to several hundred, 
usually with the same number of zombie processes. vmstat shows the 
computer mostly idle. Restarting my NFS services corrects this in the 
short term. (In the long term, I have to look at why NFS barfs from time 
to time.)
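
A quick way to confirm is to look for tasks stuck in uninterruptible sleep 
(D) or zombie (Z) state; on Linux, anything blocked in D state on a dead 
NFS mount still counts toward the load average even while the CPU sits 
idle. Rough sketch (assuming the stock Red Hat 7.1 procps and init 
scripts; adjust paths/commands for your setup):

    # processes whose STAT column starts with D or Z
    ps ax | awk '$3 ~ /^[DZ]/'

    # show which kernel routine each task is sleeping in (nfs_* hints at a hung mount)
    ps axo pid,stat,wchan,comm

    # short-term fix on Red Hat: bounce the NFS services
    /etc/init.d/nfs restart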

Keith

Nitin Gizare wrote:

> Hi all,
>
> I have a strange thing happening:
>
> 227 processes: 221 sleeping, 1 running, 5 zombie, 0 stopped
> CPU0 states:  0.1% user,  5.2% system,  0.0% nice, 93.2% idle
> CPU1 states:  0.1% user,  3.2% system,  0.0% nice, 95.2% idle
> Mem:  2059472K av, 1989012K used,   70460K free,     784K shrd,    7716K buff
> Swap: 2048248K av,  440484K used, 1607764K free                 1411684K cached
> 4:27pm  up 114 days, 19:15,  1 user,  load average: 69.05, 69.15, 69.19
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
> 19848 cdsmgr    14   0  1280 1280   944 R     2.0  0.0   0:00 top
> 21084 root      14   0     0    0     0 SW    1.7  0.0 223:08 nfsd
> 21087 root      12   0     0    0     0 SW    1.0  0.0 220:34 nfsd
> 21081 root      11   0     0    0     0 SW    0.6  0.0 221:58 nfsd
> 21083 root      11   0     0    0     0 SW    0.6  0.0 220:53 nfsd
> 21086 root      11   0     0    0     0 SW    0.6  0.0 221:04 nfsd
> 21082 root      10   0     0    0     0 SW    0.3  0.0 221:07 nfsd
> 21085 root      10   0     0    0     0 SW    0.3  0.0 222:03 nfsd
> 18627 root       0 -20  1908 1852  1224 S <   0.3  0.0 100:38 lim
>     1 root       8   0   120   72    52 S     0.0  0.0   2:36 init
>     2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
>     3 root      19  19     0    0     0 SWN   0.0  0.0   0:03 ksoftirqd_CPU0
>     4 root      19  19     0    0     0 SWN   0.0  0.0   0:04 ksoftirqd_CPU1
>     5 root       9   0     0    0     0 SW    0.0  0.0  1014m kswapd
>     6 root       9   0     0    0     0 SW    0.0  0.0 353:06 kreclaimd
>     7 root       9   0     0    0     0 SW    0.0  0.0 107:14 bdflush
>     8 root       9   0     0    0     0 SW    0.0  0.0 152:17 kupdated
>     9 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 mdrecoveryd
>    80 root       9   0     0    0     0 SW    0.0  0.0   0:00 khubd
>   610 root       9   0   348  336   288 S     0.0  0.0   3:22 syslogd
>   615 root       9   0   864  164   124 S     0.0  0.0   0:10 klogd
>   633 rpc        9   0   280  260   188 S     0.0  0.0   1:58 portmap
>   776 root       9   0   388  308   232 S     0.0  0.0   0:00 ypbind
>   778 root       9   0   388  308   232 S     0.0  0.0   7:08 ypbind
>   779 root       9   0   388  308   232 S     0.0  0.0   0:00 ypbind
>   780 root       9   0   388  308   232 S     0.0  0.0   3:40 ypbind
>   864 root       9   0   404  364   312 S     0.0  0.0   0:26 automount
>   948 root       9   0   412  372   344 S     0.0  0.0   0:31 automount
>  1024 ident      9   0   172   28    16 S     0.0  0.0   0:00 identd
>  1027 ident      9   0   172   28    16 S     0.0  0.0   7:41 identd
>  1028 ident      9   0   172   28    16 S     0.0  0.0   0:00 identd
>  1029 ident      9   0   172   28    16 S     0.0  0.0   0:00 identd
>  1030 ident      9   0   172   28    16 S     0.0  0.0   0:00 identd
>  1071 root       9   0   192    4     0 S     0.0  0.0   0:00 sshd
> I am unable to figure out why the load is so high.
> OS: Red Hat 7.1.
> Please help me figure out what is making the machine run under such a high load.
>  
> Rgds
> Nitin
>  
>



