Hello,
I have a multi-threaded echo server designed like so:
- there's a main thread and some number of worker threads (a thread pool)
- the main thread periodically (every 200 ms) does a select() over the client sockets
- depending on the select() result, the main thread either accept()s a new connection or sends a message to the worker thread pool (using POSIX message queues)
- worker threads sit in a loop waiting on mq_receive(); upon getting a message, they read() from the client socket indicated in the message and write() the data back exactly to that same socket
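To make the handoff concrete, here is roughly what I mean by "sends a message to the pool" (the queue name "/echo_jobs" and the fd-as-decimal-text message format are just illustrative choices, not the actual code):

```c
/* Sketch of the main->worker handoff over a POSIX message queue.
   The queue must be created with mq_msgsize <= MSG_SZ, since
   mq_receive() requires a buffer at least mq_msgsize bytes long. */
#include <mqueue.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

#define QNAME  "/echo_jobs"
#define MSG_SZ 32

/* Main thread: after select() reports a client readable, hand its fd
   to the pool.  Returns 0 on success, -1 on error. */
int dispatch_fd(mqd_t q, int fd)
{
    char msg[MSG_SZ];
    int len = snprintf(msg, sizeof msg, "%d", fd);
    return mq_send(q, msg, (size_t)len + 1, 0);
}

/* Worker thread: block in mq_receive() until a job arrives; return
   the client fd, or -1 on error. */
int next_fd(mqd_t q)
{
    char msg[MSG_SZ];
    if (mq_receive(q, msg, sizeof msg, NULL) < 0)
        return -1;
    return atoi(msg);
}
```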
All this is on Solaris 8 SPARC, using gcc with -lsocket -lpthread
-lrt, and using plain vanilla SOCK_STREAM sockets.
The behavior I am seeing with two clients is (imagine a timeline growing downward):

    client A                  client B
    --------                  --------
    connects                  connects
    sends message1            sends message1
    gets echo
    sends message2            sends message2
    gets echo
    disconnects
                              gets echo of strcat(message1, message2)
In other words, the second client does not get *any* response back
until the first client has disconnected. Needless to say, this sucks.
I am looking for answers/explanations for the following questions:
1. Why does the second client have this "hysteresis" effect?
2. Is there a better way than mq_*() to send messages to a thread pool?
3. All of the sockets (listening socket, client sockets) are plain
vanilla SOCK_STREAM sockets. Should I be setting some flags on them?
4. Should the worker threads read() or write() differently, e.g. with flags, or should they use some other function entirely?
5. The number of threads in the threadpool is arbitrary. It's usually
5-10. Will it go faster if there are more threads? Is there a
threshold?
6. What's a good way to debug multi-threaded processes? (I'm using lots and lots of printf()s.) Is there a debugger that can pinpoint the timeline more or less accurately?
Thanks in advance!
John