Per-process maximum memory utilization??? BSmith at
Wed Jun 12 08:18:01 CDT 2002

Hi Ed,

I think the max size for a single process is 3 GB.  I've seen problems with
our processes if they grow beyond 2 GB in size (Intel/Dialogic voice card
driver problems).  (I'm not sure how you are determining the size of your
process, but I use "ps -el" and look at the "SIZE" column to see how large
a process is.  I believe the SIZE column reports the process size in pages,
and each page is 4096 bytes, so you have to multiply the value in the SIZE
column by 4096 to get the actual size of the process in bytes.  I also
think "ps -el"'s SIZE column reports total virtual memory size, which
includes memory that has been allocated but not completely used (e.g., via
malloc()).  This may or may not be different from what the "top" command
reports for a process's memory usage.)

The hang problem you report (in sigsuspend) sounds a little like what we've
seen with pthreads (again, due to the Intel/Dialogic API which uses
pthreads) and child processes. In our case we are launching new processes
(that also contain pthreads) via fork()/exec() and would occasionally see
the child process hang in exit().  I think gdb reported the same sigsuspend
hang you describe.  Changing "exit()" to "_exit()" has greatly reduced or
eliminated the problem.  Unfortunately this may be
of no help to you since you don't mention child processes.  Sorry I don't
know more - I'm learning this as I go...

- Brian Smith

Ed Griffin <edg at> wrote on 06/11/02 10:04 PM:
Sent by: linux-poweredge-admin at
To:      linux-poweredge at
Subject: Per-process maximum memory utilization???

Does anyone know if there is a maximum size a process can grow to before
the system says "hey that's enough?"  The reason I ask is we are seeing
some bizarre behavior on some Dell PE4600s with 4 GB RAM and 8 GB swap (Red
Hat 7.2, kernel 2.4.9-34smp), where a process grows to 1.5 GB and dies.  I
have attached some of the developer's comments...

It reports that it is out of memory in our log file and then hangs (it
appears to be waiting for child threads to return, but they are stuck on a
sigsuspend call), so I can see the size of the process before it dies.
It's "only" 1.5G and there is 4G of memory on the box.

We know we have a leak in the code but we are not sure why it locks up
after using 1.5GB of memory when plenty more is available (swap hasn't even
been touched).

Here is the output of the limit command (run as the user running the
process, not root):

cputime         unlimited
filesize        unlimited
datasize        unlimited
stacksize       8192 kbytes
coredumpsize    unlimited
memoryuse       unlimited
descriptors     1024
memorylocked    unlimited
maxproc         15103
openfiles       1024

Also, what is the maximum number of threads that can be forked off a single
process?

I must admit that I am just a lowly sys admin, not a software engineer, so
any help is appreciated.  Thanks in advance.


Linux-PowerEdge mailing list
Linux-PowerEdge at
