[Sorry to those who see this twice. It went to c.o.l.d.a by mistake
the first time, so I am reposting it here.]
[snip]
> This looks bad. Without any limits set, the system should swap
> for a long time and then kill the process. Unless this poster
> failed to wait long enough, page tables ate all the memory.
Has anyone yet verified that the problem is one of page tables filling
all of physical memory? If so, I'd consider this a serious bug to which
we should try to find solutions. Note that I am trying to initiate
discussion; I am not volunteering to do the work. I really don't know
enough about the mm subsystem in Linux to do much.
The first thing that comes to mind is to abandon lazy allocation.
However, I have no intention of starting that thread again. Besides,
even without lazy allocation, if I have very little physical memory and
a lot of swap space it is still possible to overrun physical memory
with page tables before virtual memory becomes overcommitted. So let's
look for other solutions.
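To make the failure mode concrete, here is a little user-space demo.
It assumes an i386-style MMU with 4 KB pages and two-level page tables,
so each second-level page-table page maps 4 MB of virtual space; the
stride and chunk count are illustrative only.

#include <stdio.h>
#include <stdlib.h>

#define STRIDE  (4UL * 1024 * 1024)  /* 4 MB: span of one page-table page */
#define NCHUNKS 256UL                /* 1 GB of sparse virtual space */

int main(void)
{
    unsigned long i;
    /* One big lazy allocation; nothing is committed yet. */
    char *p = malloc(NCHUNKS * STRIDE);

    if (p == NULL) {
        perror("malloc");
        return 1;
    }
    for (i = 0; i < NCHUNKS; i++) {
        /* Each write faults in one data page (swappable) and one
         * page-table page (not swappable), so the page tables stay
         * pinned in physical memory while the data can go to swap. */
        p[i * STRIDE] = 1;
    }
    printf("forced roughly %lu page-table pages\n", NCHUNKS);
    return 0;
}

Run a few copies of this on a small-memory box with lots of swap and
the data pages get swapped out, but the page-table pages stay resident
and pile up.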
Another possibility is to make the page tables swappable. Note the
distinction between swapping and paging. What I am suggesting is that
in the rare case when all of physical memory is taken up with page
tables, we select a process (a sleeping one if possible), write ALL of
its page tables to swap space, and reclaim the memory those page tables
previously occupied. Since this is (hopefully) a rare condition there
shouldn't be too many concerns about the granularity; I doubt there is
really a need for LRU paging of page tables. The page tables for the
kernel would, of course, not be swappable.
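Roughly, in C (struct proc, the helper names, and the victim policy
here are all hypothetical stand-ins I made up to illustrate the idea,
not existing kernel interfaces; the two stubs stand in for real swap
I/O and page freeing):

#include <stddef.h>

struct proc {
    int          sleeping;    /* nonzero if the process is asleep */
    size_t       n_pt_pages;  /* page-table pages it owns         */
    void       **pt_pages;    /* those pages                      */
    struct proc *next;
};

/* Stubs standing in for real swap I/O and page freeing. */
static int  pt_write_to_swap(void *page) { (void)page; return 0; }
static void pt_free(void *page)          { (void)page; }

/* Prefer a sleeping victim; the granularity is a whole process,
 * so no LRU bookkeeping is kept for this (hopefully rare) path. */
static struct proc *pick_victim(struct proc *all)
{
    struct proc *p, *fallback = NULL;

    for (p = all; p != NULL; p = p->next) {
        if (p->n_pt_pages == 0)
            continue;
        if (p->sleeping)
            return p;
        if (fallback == NULL)
            fallback = p;
    }
    return fallback;
}

/* Write ALL of the victim's page tables to swap and reclaim them. */
static size_t swap_out_page_tables(struct proc *victim)
{
    size_t i, reclaimed = 0;

    for (i = 0; i < victim->n_pt_pages; i++) {
        if (pt_write_to_swap(victim->pt_pages[i]) == 0) {
            pt_free(victim->pt_pages[i]);
            reclaimed++;
        }
    }
    victim->n_pt_pages = 0;   /* the tables rebuild on fault-in */
    return reclaimed;
}

int main(void)
{
    void *pages[2] = { NULL, NULL };
    struct proc p = { 1, 2, pages, NULL };

    return swap_out_page_tables(pick_victim(&p)) == 2 ? 0 : 1;
}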
A third possibility which occurs to me is to add per-process and/or
per-user resource limits on page-table space. The problem I see with
this is that many things need to be changed if we add something to
struct rusage, such as the (u)limit built-in shell commands. Of course,
if this were done with a similar but separate mechanism we'd not need
to change a bunch of stuff. This also doesn't cure the problem, since
the only way such a limit could be 100% effective on a machine with
many users would be to make it unusably small.
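For concreteness, the separate mechanism might hook in where a
page-table page gets allocated, something like this (the struct fields
and the charge function are made up for illustration; none of this is
an existing interface):

#include <errno.h>
#include <stddef.h>

struct task {
    size_t pt_pages;        /* page-table pages currently held  */
    size_t pt_pages_limit;  /* hypothetical per-process ceiling */
};

/* Called on the path that is about to allocate a page-table page. */
static int charge_page_table(struct task *t)
{
    if (t->pt_pages >= t->pt_pages_limit)
        return -ENOMEM;     /* over the limit: refuse the page */
    t->pt_pages++;
    return 0;
}

int main(void)
{
    struct task t = { 0, 2 };

    /* Two charges succeed, the third is refused. */
    return (charge_page_table(&t) == 0 &&
            charge_page_table(&t) == 0 &&
            charge_page_table(&t) == -ENOMEM) ? 0 : 1;
}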
The simplest solution which comes to mind is just to kill a process
when we are unable to allocate a page for the page tables. This could
be the process which uses the most memory, the one which uses the most
page-table space (not always the same as the most memory), or the
process which needed the page. As I understand it, this is what is
done now when a "normal" page is needed for a process and cannot be
allocated.
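To show what picking by page-table usage might look like (struct task
and its fields are again hypothetical stand-ins):

#include <stddef.h>

struct task {
    int          pid;
    size_t       rss_pages;  /* resident data pages   */
    size_t       pt_pages;   /* page-table pages held */
    struct task *next;
};

/* Pick the process holding the most page-table pages; that is
 * not necessarily the one with the most resident memory. */
static struct task *biggest_pt_user(struct task *all)
{
    struct task *p, *worst = NULL;

    for (p = all; p != NULL; p = p->next)
        if (worst == NULL || p->pt_pages > worst->pt_pages)
            worst = p;
    return worst;
}

int main(void)
{
    struct task b = { 2, 900, 4, NULL };  /* big RSS, few page tables    */
    struct task a = { 1, 100, 9, &b };    /* small RSS, many page tables */

    return biggest_pt_user(&a)->pid == 1 ? 0 : 1;
}

Killing the biggest page-table user frees the most of the resource
that is actually exhausted; killing the faulting process is simpler
but can hit an innocent bystander.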
Comments, corrections and cash are welcome :-)
--
Paul H. Hargrove All material not otherwise attributed