Huge apache processes

Post by Brian Atkin » Wed, 19 Nov 1997 04:00:00



Last night we gave all of our users their own subdomains
(mycompany.hypermart.net), which entailed adding about 1000
virtual hosts to our 1.2.4 server's httpd.conf file. We have
about 1200 in there altogether now. The problem is that I
have to unlimit all the limits in csh just to get the server
to run, and then each of the processes reports its size as
50 MB under ps. Our server has a lot of memory, but even so
I am seeing "cannot fork/no mem" type messages in the error
log if it gets anywhere near busy. Anyone know what the problem
is? Is 1.2.4 just not able to handle that many virtual hosts?
These all are non-IP virtual hosts BTW.
--
The future has arrived; it's just not evenly distributed.
                                                       -William Gibson
______________________________________________________________________
Visit Hypermart at http://www.hypermart.net for free virtual hosting!
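For scale, the setup described above amounts to one name-based <VirtualHost> stanza per customer in httpd.conf. A hypothetical sketch of generating those ~1000 stanzas (the hostnames, shared IP, and document-root layout are illustrative assumptions, not Hypermart's actual configuration):

```shell
# Hypothetical sketch: emit N name-based <VirtualHost> stanzas of the
# kind Apache 1.2 required, one per customer. The IP, hostnames, and
# paths below are illustrative assumptions.
gen_vhosts() {
    n=$1
    i=1
    while [ "$i" -le "$n" ]; do
        printf '<VirtualHost 10.0.0.1>\n'
        printf '    ServerName company%d.hypermart.net\n' "$i"
        printf '    DocumentRoot /home/company%d/public_html\n' "$i"
        printf '</VirtualHost>\n\n'
        i=$((i + 1))
    done
}

# e.g.  gen_vhosts 1000 >> httpd.conf
```

Every stanza is parsed into the server's in-core configuration, which each child process inherits, so per-vhost overhead is one plausible reason ps reports each child at 50 MB (much of that is likely shared between children after fork, which ps does not show).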


Huge apache processes

Post by Brian Atkin » Wed, 19 Nov 1997 04:00:00


Further info on this mystery:

I tried 1.3b2 with the exact same results.
We have grown a bit today so that now we have close to 1300
virtual hosts, and now the server will not even start up
(even with all shell limits unlimited), failing with the
message: "Ouch!  malloc failed in malloc_block()"

I was under the impression that non-IP virtual hosts were
going to free ISPs from the severe limitations imposed by
IP-based virtual hosting (i.e. only 200 or so hosts per
physical server). Apparently there is still a ways to go before
we will see 10k hosts per server... It's a shame, too, since
Hypermart is attempting to provide a genuinely good service
to businesses wishing to have a home on the net for free.

I reported this bug to the Apache group, but they seem to have
dismissed it already as unimportant.


> Last night we gave all of our users their own subdomains
> (mycompany.hypermart.net), which entailed adding about 1000
> virtual hosts to our 1.2.4 server's httpd.conf file. We have
> about 1200 in there altogether now. The problem is that I
> have to unlimit all the limits in csh just to get the server
> to run, and then each of the processes reports its size as
> 50MEGS under ps. Our server has a lot of mem, but even so it
> I am seeing "cannot fork/no mem" type messages in the error
> log if it gets anywhere near busy. Anyone know what the problem
> is? Is 1.2.4 just not able to handle that many virtual hosts?
> These all are non-IP virtual hosts BTW.

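For context on where this eventually went: later releases in the Apache 1.3 series added mod_vhost_alias, which handles mass virtual hosting with a single directive instead of thousands of stanzas. A hedged sketch of that approach (the directory layout is an assumption, and the module did not exist yet in 1.3b2):

```apacheconf
# One directive serves every *.hypermart.net name from a per-customer
# tree: %1 expands to the first dot-separated part of the Host header,
# e.g. mycompany.hypermart.net -> /home/mycompany/public_html.
UseCanonicalName Off
VirtualDocumentRoot /home/%1/public_html
```

Because there is no per-host stanza to parse, configuration memory stays flat no matter how many names point at the server.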


Unfeasibly huge Apache process

We had an odd problem with a web server today: one of the Apache child
processes had grown to 1.5 GB in size (no typo!) and was therefore
failing to fork for CGI requests. Has anyone seen this happen before?
What is likely to have caused it?

The system is an E3000 running Solaris 2.5.1 and Apache 1.3b3 and is
dedicated to serving CGIs.

Tony.
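One quick way to spot a runaway child like that is to filter ps output by virtual size. A hedged sketch: the filter itself is plain awk, while the ps invocation in the comment uses POSIX-style `-o` keywords, and whether Solaris 2.5.1's /usr/bin/ps supports them exactly as shown is an assumption.

```shell
# filter_big LIMIT_KB — read "pid vsz comm" lines on stdin and keep the
# httpd entries whose virtual size exceeds the limit (field order is
# assumed to match the ps invocation below).
filter_big() {
    awk -v lim="$1" '$3 ~ /httpd/ && $2 + 0 > lim { print }'
}

# Typical use (flag support on a given ps is an assumption):
#   ps -e -o pid= -o vsz= -o comm= | filter_big 500000
```

A child reported at 1.5 GB would show up immediately against any sane threshold, which at least narrows the hunt to that one process before it starts refusing to fork CGIs.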
