From: Kyler Jones <redman>
Date: Fri, 28 Apr 1995 01:19:06 GMT
I'm very concerned about the way Linux handles disk caching. I
have 16 megs of RAM. The problem is when I load a largish program
(such as Netscape under X) and then exit the program, most of my RAM
has been claimed by buffers. If I then run a new largish program
(such as xdoom) the buffers aren't properly returning memory for
application use. What's more, sometimes the system will go into a
seemingly endless loop of disk activity.
Is there some way to flush buffers on command or even disable
dynamic disk caching??
I seem to see the exact opposite, which is a problem for my group of
Linux users: Memory consumed by running processes never gets swapped
out if it does not exceed the amount of RAM in the box. This causes the
buffers to get progressively smaller, which has a very bad effect on
compiling large source trees.
In my case, my system has 3 C++ hackers using Emacs and X, and before
you know it most of our 32MB of RAM is all taken up and compiles start
thrashing. To reduce this problem, every so often I run a little
program that allocates and zeroes up to close to 32MB (in stages).
This causes everything to get swapped out and only what is actually
being used gets swapped back in. This is painful for about 30 seconds
but then system performance improves dramatically. The next obvious
thing to do is to upgrade to 64MB!
It seems to me that main memory pages consumed by existing processes
should be aged and swapped out on a continuous basis (up to a certain
point) so that the buffers can grow to a sysadmin-settable amount.
Does Linux have anything to help in this area?