I am writing a TCP Socket server that has to support 100 simultaneous
connecting clients. This means that the 100 clients will simultaneously do a
connect() and the server has to accept the connections gracefully without
dropping anybody. I am using Windows XP Pro.
To test my server I wrote a test application that creates 100 threads, which
wait for a signal to start.
After creating the threads, the main loop signals them to start connecting.
Each thread does 100 connect/shutdown sequences and exits.
After all the threads are finished, the program prints the test results.
My problem is that sometimes connect() is unable to complete and
WSAGetLastError() returns WSAEADDRINUSE.
According to the documentation this is a bind() problem, but I am not calling
bind() on the client side; in theory the socket is bound to a local port
implicitly when connect() is called.
This does not happen if I reduce the amount of threads to 10.
Can this be happening due to a bug in connect() on an unbound socket?
Does connect() protect itself correctly against race conditions?
Maybe several simultaneous connect() calls are trying to reserve the same
local port, and only one succeeds.
How would setsockopt(SO_REUSEADDR) affect this behaviour?
Would the problem disappear if I bind() the socket before issuing the
connect()?
Are ConnectEx(), AcceptEx(), and overlapped I/O necessary for this kind of
load, or are plain connect() and accept() good enough?