Per-process maximum memory utilization???

David Vos dvos12 at
Wed Jun 12 08:03:00 CDT 2002

On a 32-bit system, there can only be 2^32 addressable bytes (4GB).  Intel
kind of cheated and added four extra physical address bits (PAE) that the
operating system can use, allowing up to 64GB of RAM in the machine,
although any one process still only has access to 4GB of virtual address
space at a time.

However, I do know that the kernel sets aside a sizable chunk of the
addressable memory space for itself and for loading system libraries.  So
the code we write has access to less: on stock 2.4 x86 kernels the top
1GB of each process's 4GB address space belongs to the kernel, and shared
libraries are by default mapped in starting at the 1GB mark, so the heap
can collide with the library mappings well before it reaches 3GB.  If you
need to, I might be able to dig up old emails from people who say they
edited the kernel source to move this split.  This limit is also highly
dependent on the kernel version, and I can see how a change to things
like the VM could improve your situation.

Also keep in mind that the stack runs out of memory a lot quicker than the
heap.  (Your stacksize limit below is 8192 kbytes, so a local array much
bigger than 8MB will crash the process no matter how much heap is free.)

As far as the maximum number of threads goes, that is even more over my
head, but I'll take a swing.  From what I can gather, it depends first of
all on the library you are using, such as pthreads.  LinuxThreads maps
each thread onto one of Linux's lightweight processes (via clone()),
which are given 32-bit identifiers.  So you could theoretically have 2^32
threads total among all your processes at a given time, assuming there
are no other limits.  That is probably a bad assumption: I believe
glibc's LinuxThreads compiles in a PTHREAD_THREADS_MAX (1024 by
default), your maxproc is 15103, and every thread's stack eats into the
same 4GB address space.


On Tue, 11 Jun 2002, Ed Griffin wrote:

> Does anyone know if there is a maximum size a process can grow to before 
> the system says "hey that's enough?"  The reason I ask is we are seeing 
> some bizarre behavior on some Dell PE4600's with 4GB RAM 8GB swap (Redhat 
> 7.2, kernel 2.4.9-34smp), where a process grows to 1.5GB and then dies.  I 
> have attached some of the developer's comments...
> It reports that it is out of memory in our log file and then hangs (appears 
> to be waiting for child threads to return but they are stuck on a 
> sigsuspend call) so I can see the size of the process before it dies.  It's 
> "only" 1.5G and there is 4G of memory on the box.
> We know we have a leak in the code but we are not sure why it locks up 
> after using 1.5GB of memory when plenty more is available (swap hasn't even 
> been touched).
> Here is the output of the limit command (run as the user running the 
> process not root)
> cputime         unlimited
> filesize        unlimited
> datasize        unlimited
> stacksize       8192 kbytes
> coredumpsize    unlimited
> memoryuse       unlimited
> descriptors     1024
> memorylocked    unlimited
> maxproc         15103
> openfiles       1024
> Also what is the maximum number of threads that can be forked off a single 
> process?
> I must admit that I am just a lowly sys. admin. not a software engineer, so 
> any help is appreciated, thanks in advance.
> --Ed
> _______________________________________________
> Linux-PowerEdge mailing list
