We thought about running separate binaries, but in the end adopted a
"phased" approach to the problem. Since most of the open file handles
came from log files, we simply sent all virtual host logs to one file
and then split them out with a script every week. That freed up about
100 handles. If it creeps back up again we'll try using two binaries -
maybe with a scripted kludge to keep the httpd.conf files in sync so
the configs don't start drifting apart...
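
The weekly split can be sketched roughly like this (a minimal sketch,
not our actual script - it assumes the combined log is written with a
leading vhost field, e.g. a LogFormat starting with "%v", and the file
names are illustrative):

```python
# Group lines of a combined virtual-host log by their leading vhost
# field, so each vhost's entries can be written back to its own file.
from collections import defaultdict

def split_vhost_log(lines):
    """Return {vhost: [log lines]} for lines prefixed with a vhost name."""
    per_vhost = defaultdict(list)
    for line in lines:
        vhost, _, rest = line.partition(" ")  # split off the %v field
        per_vhost[vhost].append(rest)
    return per_vhost

# Usage: append each group to a per-vhost file.
# for vhost, entries in split_vhost_log(open("access_log")).items():
#     with open(f"{vhost}-access_log", "a") as out:
#         out.writelines(entries)
```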
JJ
PS: We could of course re-compile the kernel with a higher limit, but
there must be some reason why it's set like it is - or can you just
whack it up to a million?
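
(For what it's worth, on Solaris you shouldn't need a kernel rebuild
at all - IIRC the per-process file descriptor ceilings are tunable in
/etc/system and take effect at reboot. Values below are illustrative,
not recommendations:

```
* /etc/system - raise the soft/hard fd limits per process
set rlim_fd_cur = 1024
set rlim_fd_max = 4096
```

Whether a million is sane is another question - stdio on 32-bit
Solaris historically had trouble with descriptors above 255, which may
be part of why the defaults are conservative.)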
> > Hi,
> > Wondering if anyone is running Apache with a HARD_SERVER_LIMIT of
> > 1024, especially on Solaris 7. Also interested in anyone who has
> > done something OTHER than increase this to 1024, like running
> > multiple instances of Apache bound to different IPs with a load
> > balancer or something.
> IIRC, the HARD_SERVER_LIMIT is 256 on all Unices; it's 1024 for
> threaded Apaches (e.g. Microsoft). If you want to change it, I don't
> think there's anything magic about 1024.
> --
> Joe Schaefer