Step one: write a small test in C that opens 2048 sockets.
If it succeeds, it's a bug in Java. Contact the Blackdown folks to
see if they have a suggestion. Optionally, get their code from them,
find the bug, fix it, and send the patch back to them. Remember,
this is what open source is about...
If it fails, it's a bug in the Linux OS, or possibly your setup.
Rephrase your question to the appropriate Linux discussion group.
Be prepared to do some work on your own, since this is open source.
Remember, there's no such thing as an insoluble problem, as long as
you have the source. Problems just have shades of inconvenience.
Also, since you have control over the client, the server, and the
protocol, a simpler (and more scalable) way of doing things would
be to redesign your server to only have about 250 sockets open per
process, since that's probably the maximum number of threads per
process for good performance. Handle additional requests via
redirects to other spawned processes on other ports, or other
machines...
Please, no applause, just throw money.
Jim
>I know this has been discussed over and over, but there seems to be no
>real solution for this problem. Here's the problem:
>I am developing a client/server chat application that requires at least 2048
>simultaneous socket connections. The server portion is running on Linux
>2.3.4 using blackdown's JDK 1.1.7. Somehow after reaching about 500
>connections, the whole application fails.
>I have attempted recompiling the kernel and updating OPEN_MAX, FD_SETSIZE
>and NR_OPEN to 2048 - this causes a core dump when about 600 socket
>connections are established.
>I've inserted the following in the rc.sysinit file and that did absolutely
>nothing:
>echo 16384 >/proc/sys/kernel/file-max
>echo 32768 >/proc/sys/kernel/inode-max
>What is the solution to this problem, if any? Who do I need to contact to
>resolve this issue? Is there a tested solution that someone can recommend?
>Al Nios