> I have a question regarding some strangeness that was noted on
> a Linux machine that is being used as a compute-server. It's a
> Pentium Pro 200, 512M of RAM, 256K cache. The process(es) that is
> being run on this machine will take up approximately 445M of RAM.
> These include C programs that malloc blocks of between 512K and
> 200M at a time, so a relatively small number of mallocs is
> performed to grab a large amount of space. However, once it reaches
> 445M of space or so, processes start swapping out, and it is noted
> that the remaining RAM, about 70M, is being tied up. This is
> happening while running under X, which initially consumes less than
> 10M of RAM. There is only 1 user on the system, and it is not
> networked to anything else that would allow any other user in.
> The owner of this machine wants to purchase another 512M for
> it, but would be hesitant to do so if 10%-15% of the RAM will be
> tied up doing something else. Does anybody know what that 70M of RAM
> is being used for, and whether it would stay at 70M or jump to about
> 150M if total RAM was increased to 1G?
Page tables are the only thing which would grow as memory grows. They
aren't swappable either, and for a given work load they are a fixed
overhead. Think of them like directories on a disk system (why can't I
save 1,000,000 1K files on a 1Gb disk?).
You can calculate the total size of the page tables required for a
given total virtual memory size (TVM, ie all processes added together)
by:

    All sizes in 1K units:
    page table memory = ((TVM / 4) / 1024 + 1) * 4

(Each page is 4K and each page table entry is 4 bytes, so 1024 page
descriptors fit in a 4K page; you then need a whole 4K page for each
block of 1024 descriptors.) There is also a much smaller overhead for
the control structure which lists the page table descriptors (the page
directory). Its size depends on the way memory was allocated, but Linux
simply uses one for the system and one for each process, so it is only
going to be a few K.
Then on top of that you have a fixed element for Linux itself.
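If you want to play with the numbers, here is a minimal C sketch of the
arithmetic above, assuming i386-style 4K pages, 4-byte page table
entries and one 4K page directory per process (the function names are
just for illustration):

    #include <stdio.h>

    #define PAGE_KB        4     /* i386 page size, in 1K units      */
    #define PTES_PER_PAGE  1024  /* 4-byte entries in one 4K page    */

    /* Page table overhead, in 1K units, for tvm_kb of total virtual
       memory (all processes added together), rounded up crudely as in
       the formula above. */
    static unsigned long page_table_kb(unsigned long tvm_kb)
    {
        unsigned long pages = tvm_kb / PAGE_KB;
        return (pages / PTES_PER_PAGE + 1) * PAGE_KB;
    }

    /* Page directory overhead: one 4K page for the system plus one
       per process. */
    static unsigned long page_dir_kb(unsigned int nprocs)
    {
        return (nprocs + 1UL) * PAGE_KB;
    }

    int main(void)
    {
        printf("512Mb    -> %luK of page tables\n",
               page_table_kb(512UL * 1024));
        printf("1Gb      -> %luK of page tables\n",
               page_table_kb(1024UL * 1024));
        printf("10x400Mb -> %luK of page tables + %luK of page "
               "directories\n",
               page_table_kb(10UL * 400 * 1024), page_dir_kb(10));
        return 0;
    }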
So 512Mb of memory requires roughly 512K of page tables to keep track
of it, and 1Gb requires roughly 1Mb.
If you had 10 processes using 400Mb each, your TVM = 4Gb, requiring
about 4Mb to keep track of it.
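Running the sketch above for these three cases prints:

    512Mb    -> 516K of page tables
    1Gb      -> 1028K of page tables
    10x400Mb -> 4004K of page tables + 44K of page directories

which matches the round figures above once the crude rounding up to
whole 4K pages is allowed for.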
So to answer your question: the size of the overhead will grow with the
working set you use.