Earlier this week I posted a question concerning a server process that had
to be connected to twice by the client to get read()s and write()s going
properly. Andrerw Gierth was kind enough to respond with a couple of
suggestions that ultimately led me to find out what was wrong, but now I want
to know why!
Here is what happens:
the client connect()s with the server,
the server accept()s the client,
the server does a read( client_sock, ... ) which returns 0 && errno = 0,
the server does a write to the client,
the server goes into another read( client_sock, ... ) which returns 0 && errno = EBADF,
the server returns to the accept() loop,
I SIGQUIT the client,
I immediately re-invoke the client, and this time it connects and the session
runs as expected until the client ends it.
This behaviour is the same when I telnet to the host and port. First time
nada, second time the read()s and write()s go as expected.
To correct this behaviour I changed the line of code that had my accept()
statement:
OLD-> if ( client = accept( server_sock, ... ) == -1 )
          take care of error;
      else
          continue on, fork() child to handle session...

NEW-> client = accept( server_sock, ... );
      if ( client == -1 )
          take care of error;
      else
          continue on, fork() child to handle session...
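For reference, here is a minimal sketch of the working (NEW) arrangement as a
complete loop. handle_session() is just a placeholder for the child's
read()/write() work, and the error handling is condensed:

      #include <stdio.h>
      #include <unistd.h>
      #include <sys/types.h>
      #include <sys/socket.h>

      extern void handle_session(int sock);     /* placeholder for the real work */

      void serve(int server_sock)
      {
          for (;;) {
              int client = accept(server_sock, NULL, NULL);  /* assign first... */
              if (client == -1) {                            /* ...then test */
                  perror("accept");
                  continue;
              }

              pid_t pid = fork();
              if (pid == 0) {                    /* child: handle this session */
                  close(server_sock);
                  handle_session(client);
                  close(client);
                  _exit(0);
              }

              close(client);                     /* parent: back to accept() */
          }
      }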
Why doesn't it work the first time with the composite if() statement? I have
had one answer so far suspecting that this is an artifact of compiler
optimization. The platforms this code has run on are DEC Alphas, and a
Pentium and a 486 running different flavors of Solaris 2.x.
Thanks,
KFW