Memory loss with lots of memory?

Post by Tom Ku » Wed, 01 Oct 1997 04:00:00



Howdy,
        I have a question regarding some strangeness that was noted on
a Linux machine being used as a compute server.  It's a Pentium Pro 200
with 512M of RAM and a 256K cache.  The processes being run on this
machine take up approximately 445M of RAM.  These include C programs
that malloc blocks of anywhere from 512K to 200M at a time, so a
relatively small number of mallocs grabs a large amount of space.
However, once it reaches 445M of space or so, processes start swapping
out, and the remaining RAM, about 70M, appears to be tied up.  This
happens while running under X, which initially consumes less than 10M
of RAM.  There is only 1 user on the system, and it is not networked to
anything else that would allow any other user in.
        The owner of this machine wants to purchase another 512M for
it, but is hesitant if 10%-15% of the RAM will be tied up doing
something else.  Does anybody know what that 70M of RAM is being used
for, and whether it would stay at 70M or jump to about 150M if total
RAM were increased to 1G?

Thanks,
Tom

 
 
 

Memory loss with lots of memory?

Post by Chris Cradock » Thu, 02 Oct 1997 04:00:00



> Howdy,
>         I have a question regarding some strangeness that was noted on
> a Linux machine that is being used as a compute-server.  It's a
> Pentium Pro 200, 512M of RAM, 256K cache.  The process(es) that is
> being run on this machine will take up approximately 445M of RAM.
> These include C programs that will malloc sizes between 512k at once
> and 200M at once.  Thus, it is a relatively small number of mallocs
> performed to grab a large amount of space.  However, once it reaches
> 445M of space or so, processes start swapping out, and it is noted
> that the remaining RAM, about 70M, is being tied up.  This is
> happening while running under X, which initially consumes less than
> 10M of RAM.  There is only 1 user on the system, and it is not
> networked to anything else that would allow any other user in.
>         The owner of this machine wants to purchase another 512M for
> it, but would be hesitant of it if 10%-15% of the RAM will be tied up
> doing something else.  Does anybody know what that 70M of ram is being
> used for, and whether or not it would stay at 70M or jump to about
> 150,if total ram was increased to 1G?

> Thanks,
> Tom

Page tables are the only thing which would grow as memory grew.  They
aren't swappable either, and for a given work load they are a fixed
overhead.  Think of them like directories on a disk system (why can't I
save 1000000 1Kb files on a 1Gb disk?)

You can estimate the total size of the page tables required for a given
allocated total virtual memory size (TVM, ie all processes added
together) by:

All sizes in 1K chunks: required memory for page tables = ((TVM/4)/1024 + 1)*4

(Each page is 4K and a page table entry is 4 bytes, so 1024 page
descriptors fit in a 4K page; one page table page therefore maps 4Mb.)
There is also a much smaller overhead for the page directory which
lists the page tables; Linux uses one for the kernel and one for each
process, so it's going to be only a few K.  Then on top of that you
have a fixed element for Linux itself.

So 512Mb of allocated memory requires roughly 512Kb of page tables to
keep track of it; 1Gb requires about 1Mb.  If you had 10 processes
using 400Mb each, your TVM = 4Gb, requiring about 4Mb to keep track of
it.

So to answer your question, the size of the overhead will grow with the
working set you use.

--
Chris Cradock


 
 
 

Memory loss with lots of memory?

Post by Andreas Dilger » Thu, 02 Oct 1997 04:00:00




>    I have a question regarding some strangeness that was noted on
>a Linux machine that is being used as a compute-server.  It's a
>Pentium Pro 200, 512M of RAM, 256K cache.  The process(es) that is
>being run on this machine will take up approximately 445M of RAM.

You should also be aware that a 256K cache isn't enough to cache more
than 64MB of RAM, so you will be suffering a huge performance hit.
I assume you are already declaring the full 512MB of RAM to the kernel,
because Linux won't automatically detect it all.

>These include C programs that will malloc sizes between 512k at once
>and 200M at once.  Thus, it is a relatively small number of mallocs
>performed to grab a large amount of space.  However, once it reaches
>445M of space or so, processes start swapping out, and it is noted
>that the remaining RAM, about 70M, is being tied up.

Two issues to be considered here.  One is that Linux itself will be using
some memory for X11 (which grows as more windows are created), for file
cache, for other tasks, etc.  Also, if you are trying to allocate large
chunks of memory, freeing some, and then allocating more memory, it may
be that you have memory allocated to your process heap which can't
fulfill the new requests, and you need more memory even though some is
technically "free".

I can't tell from your posting if this is a stupid suggestion or not, but
have you tried using "top" to see which processes are consuming the memory?

Cheers, Andreas
--
Andreas Dilger   University of Calgary  \"If a man ate a pound of pasta and
                 Micronet Research Group \ a pound of antipasto, would they
Dept of Electrical & Computer Engineering \   cancel out, leaving him still
http://www-mddsp.enel.ucalgary.ca/People/adilger/       hungry?" -- Dogbert

 
 
 

Memory loss with lots of memory?

Post by Mark Hahn » Fri, 03 Oct 1997 04:00:00


: You should also be aware that a 256K cache isn't enough to cache more
: that 64MB or RAM, so you will be suffering a huge performance hit.

false.  the 430FX/VX/TX chipsets are limited like this, but the
430HX is commonly provided with 11 bits of tag.  and, more relevant
to the original poster, the P6 family caches 4G (PPro) or 512M (PII).

: I assume you are already declaring the full 512MB RAM to the kernel
: because Linux won't automatically detect it all.

also false.  recent linux kernels will indeed detect > 64M.

: >These include C programs that will malloc sizes between 512k at once
: >and 200M at once.  Thus, it is a relatively small number of mallocs
: >performed to grab a large amount of space.  However, once it reaches
: >445M of space or so, processes start swapping out, and it is noted
: >that the remaining RAM, about 70M, is being tied up.

two words: internal fragmentation.  this is the reason that recent
versions of malloc use mmap to separately allocate any large chunks
of memory.

regards, mark hahn.
--

                                        http://neurocog.lrdc.pitt.edu/~hahn/

 
 
 

Memory loss with lots of memory?

Post by Scott Gregory Miller » Sun, 05 Oct 1997 04:00:00


:
: You should also be aware that a 256K cache isn't enough to cache more
: that 64MB or RAM, so you will be suffering a huge performance hit.
: I assume you are already declaring the full 512MB RAM to the kernel
: because Linux won't automatically detect it all.
:

Question regarding the above: since the Pentium II has 512K of cache,
does this mean the system is limited to 128Mb of cacheable memory?  Can
you upgrade the cache on a P II?

                                Scott Miller

 
 
 

Memory loss with lots of memory?

Post by Martin Schen » Tue, 07 Oct 1997 04:00:00



> :
> : You should also be aware that a 256K cache isn't enough to cache more
> : that 64MB or RAM, so you will be suffering a huge performance hit.
> : I assume you are already declaring the full 512MB RAM to the kernel
> : because Linux won't automatically detect it all.
> :

> Question regarding the above, since the Pentium II has 512k of cache,
> does this mean the system is limited to 128 mb of cachable memory?  Can
> you upgrade the cache on a P2?

>                                 Scott Miller

The statement "256K gives you only 64Mb of cacheable RAM" is valid
only for (some?) Pentium chipsets, so you had better check for your
chipset whether you need more cache RAM, a certain kind of tag RAM,
or whatever.

It does not apply to the PentiumPro and Pentium II.
AFAIK, the PPro is able to cache all of its address space (at least
4Gb, perhaps even more using some special addressing schemes), while
the P II has been crippled by Intel so that it cannot cache more than
512Mb (and you can only run a maximum of 2 P IIs in an SMP system).

I guess they crippled the P II so they can still sell the PPro for
big server systems (you might have noticed that the price of a 200
MHz 256Kb-cache PPro has remained relatively high for a very long
time).

Next year, they will introduce a version of the P II that will be
usable in big servers (>2 CPUs, >512Mb RAM, more L2 cache).

--

Richard-Wagnergasse 46, A-8010 Graz, Austria

 
 
 

Memory loss with lots of memory?

Post by Mark Hahn » Fri, 10 Oct 1997 04:00:00


: The statement "256K gives you only 64Mb of cacheable RAM" is valid
: only for (some ?) Pentium chipsets (so you better check for your

actually, all Intel chipsets have the tag bits wired to address bits,
rather than floating above cache.  so they cache 64M regardless of
cache size.  in 486's, this was often not the case; it may also not
be the case in non-Intel chipsets.

short story is:
        486: depends on cache size, tag bits and policy.
        586: 64M, except for some 430HX boards (512M).
        686: >1G for P6, 512M for PII.

regards, mark hahn.
--

                                        http://neurocog.lrdc.pitt.edu/~hahn/

 
 
 

Memory loss with lots of memory?

Post by Mattias Hembru » Fri, 10 Oct 1997 04:00:00




>: The statement "256K gives you only 64Mb of cacheable RAM" is valid
>: only for (some ?) Pentium chipsets (so you better check for your

>actually, all Intel chipsets have the tag bits wired to address bits,
>rather than floating above cache.  so they cache 64M regardless of
>cache size.  in 486's, this was often not the case; it may also not
>be the case in non-Intel chipsets.

>short story is:
>    486: depends on cache size, tag bits and policy.
>    586: 64M, except for some 430HX boards (512M).
>    686: >1G for P6, 512M for PII.

I'm assuming this is for 256K of cache only?  What about the 512K that
all the newer 586 motherboards seem to be using?  Does it simply double
to 128M of cacheable space, or does your lower 64M simply have twice as
much cache, with still nothing being cached above 64M?

Mattias
--

Software Support, Electrical and Computer Engineering,
University of Waterloo, Waterloo, Ontario

 
 
 

1. Repeated failed open()s result in lots of used memory [Was: [Fwd: memory consumption]]

You write:

You are probably creating negative dentries.  Check /proc/slabinfo for
the number of dentries, and it will confirm this.  I'm not sure why
that would cause swapping, but then again I haven't checked the policy
for shrinking the dentry cache recently, and there have been a number
of changes in that area lately.

Cheers, Andreas
--
Andreas Dilger  \ "If a man ate a pound of pasta and a pound of antipasto,
                 \  would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/               -- Dogbert

