configuring amount of memory a process can use?

Post by tom » Sun, 13 Jul 2003 07:07:45



Hi,

    I'm running Tomcat 4.0.5 on OpenBSD 3.2, and if I run the Tomcat process
as user "root" I never get any complaints that there isn't enough memory.
The problem is that if I run another Tomcat instance (on a different port) as
a non-root user, Tomcat complains that it is out of memory for certain
applications. This doesn't happen if I run both instances as "root".

    Is there somewhere I can configure the amount of memory a user or
process can use?

Thanks,

-Tom

 
 
 

configuring amount of memory a process can use?

Post by Len Philpot » Sun, 13 Jul 2003 13:02:46



Quote:
> Is there somewhere I can configure the amount of memory a user or
> process can use?

Not sure whether it's available on OpenBSD, but ulimit on Solaris (and
elsewhere?) can set file descriptor limits, data segment sizes, etc.
I've used it to increase resources for a certain Legato Networker
utility that would bail out halfway through until I raised some limits.
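For what it's worth, the shell's ulimit builtin is just a front-end to the
getrlimit()/setrlimit() system calls, which OpenBSD does provide. A minimal
sketch (here in Python, whose standard resource module wraps the same calls)
of reading the limits ulimit reports and writing one back:

```python
import resource

# The shell's ulimit builtin is a front-end to the getrlimit()/
# setrlimit() system calls; Python's resource module wraps the same
# interface, so this shows the values ulimit reads and writes.
data_soft, data_hard = resource.getrlimit(resource.RLIMIT_DATA)    # data segment size
file_soft, file_hard = resource.getrlimit(resource.RLIMIT_NOFILE)  # open file descriptors

print("data segment: soft=%r hard=%r" % (data_soft, data_hard))
print("open files:   soft=%r hard=%r" % (file_soft, file_hard))

# An unprivileged process may lower its soft limit (and raise it again,
# up to the hard limit); only root may raise a hard limit. Writing the
# current values back is always legal:
resource.setrlimit(resource.RLIMIT_NOFILE, (file_soft, file_hard))
assert resource.getrlimit(resource.RLIMIT_NOFILE) == (file_soft, file_hard)
```

Limits set this way are inherited across fork()/exec(), which is why raising
them in the shell before starting a daemon (or a Networker utility) works.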

--

-- Len Philpot                          ><>  --



 
 
 

1. Limit the amount of physical memory used by a process

Here are my reasons for doing that:

I'm working on load balancing in a network of workstations (all BSD, at
the beginning). One obvious requirement is that a job executed for
someone else doesn't interfere with the personal use of the workstation.
The CPU problem can be addressed with priorities, but the problem of
physical memory cannot. If a user stops to think for a few minutes,
leaving the workstation idle, the load balancing system can send a huge
job to the machine, a process that will steal all the physical pages.
Even if the job goes away when the workstation's owner is back, it will
take several seconds to bring the pages back from disk. That is too long
when you're editing, for example.

One obvious solution would be to force the job to use only a fixed set
of physical pages. It would page heavily, but with this method the
"normal" user wouldn't be affected and would keep contributing the
machine to the pool of CPU servers.

Unfortunately, Unix doesn't seem to provide a way to limit the amount
of physical memory a process uses. The C shell's 'limit' built-in
command limits only the virtual memory, which I don't want to do, and
'setrlimit' sets only a soft limit on physical memory; it doesn't
prevent a job from taking everything.

So, is there something I missed, or should I look at non-portable
solutions (ideas about them are welcome)?
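To make the soft-limit complaint concrete, here is a small sketch (Python's
resource module wraps setrlimit() directly; the 64 MB figure is an arbitrary
illustration, not a recommendation). The call succeeds, but on most Unix
kernels RLIMIT_RSS is exactly the advisory limit described above: the pager
consults it only under memory pressure, so on an idle machine a greedy job
can still claim every free physical page.

```python
import resource

# RLIMIT_RSS nominally caps a process's resident set size, but most
# Unix kernels treat it as advisory: it is only honoured when the
# pager is under memory pressure, so it cannot reserve physical pages
# for the workstation's owner.
soft, hard = resource.getrlimit(resource.RLIMIT_RSS)

cap = 64 * 1024 * 1024              # hypothetical 64 MB cap for the batch job
if hard != resource.RLIM_INFINITY:
    cap = min(cap, hard)            # a soft limit may never exceed the hard one

resource.setrlimit(resource.RLIMIT_RSS, (cap, hard))
new_soft, _ = resource.getrlimit(resource.RLIMIT_RSS)
print("RSS soft limit now:", new_soft)
```

A load-balancing daemon would run this in the child between fork() and exec()
of the migrated job, so the limit applies only to the guest process.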
-------------
Stephane Bortzmeyer           Conservatoire National des Arts et Metiers        

                              292, rue Saint-Martin                    
tel: +33 (1) 40 27 27 31      75141 Paris Cedex 03
fax: +33 (1) 40 27 27 72      France    

"It is at night that it is beautiful to believe in the light." E. Rostand

2. Have a site and looking for suggestions for Linux Page

3. How to see memory used per user; memory used per process

4. s.n.a.f.u. 1b-i

5. maximum amount of memory per process

6. Problem With TLI-Programming

7. How to find out amount of time used by a process ?

8. Linux Frequently Asked Questions with Answers (FAQ: 2/2)

9. Trouble using large amount of memory with Redhat 7

10. Programs for checking amount of memory used?

11. Using large amounts of memory in the kernel

12. Using C to retrieve amount of free memory?

13. How to find out the memory size used by a process or check its memory leaking?