I have a socket-based client-server application that acts as follows:

Client:
1. Client sends "task" data to the server.
2. Client waits for results from the server.
3. Client receives the results and prints them to a file.

Server:
1. Server waits for clients to connect.
2. Server receives "task to perform" data from a client.
3. Server forks:
   (a) Child performs the task via a system() call
       and returns the results to the client.
   (b) Parent loops back to (1.) and waits for the next client.
Without performing a wait() on the child process, zombie/defunct
processes accumulate. Each zombie holds a process-table slot (it is
the process table, not the file descriptor table, that fills up), so
eventually fork() starts failing.
If I do a wait() explicitly, only one request can run at a time.
So, I have written a signal-handler routine "do_wait()" to
perform the wait(). Of course, since the child process performs
a system() call to execute the task, the child code needs to
revert SIGCHLD to the default handler, or else system() will
return -1 with an ECHILD (10) error. (It took me a while to figure
that one out: why the hell was the system command working correctly
and outputting all the correct information while a -1 was being
returned?)
Anyways, after all that, I am now getting an error:
    "accept failed: Interrupted system call"
==> [EINTR] The call was interrupted by a signal before a valid
    connection arrived.
So, I am stuck with the dilemma of HOW DO I PREVENT THE CREATION
OF DEFUNCT PROCESSES WHILE ALLOWING CONCURRENT PROCESSES TO RUN??
Any help is appreciated.
Jim Stelzig |
ESN 393-1274 | Recovery-Tools Support