TCP client/server programs....
If multiple messages arrive at a socket and are not
immediately received, does UNIX queue them at the
receiving socket, or does the protocol hold the
message up at the sending end and retransmit it
later, or what?
Here's the actual puzzle. I have
a very simple TCP client and TCP server.
Each process is running on the same host in
this case, but I get the same behaviour if
they are on separate hosts. Anyway, the whole
thing is just a simulation of automatic-repeat-request
(ARQ) error-control protocols, so all it does is
send about 30 frames of data from server to client.
The client detects whether each frame is good
or damaged and then returns a short two-byte
frame as a positive or negative acknowledgement.
Two of the protocols simulated allow the
server to send multiple frames, say 4 or 8,
before attempting to receive the acknowledgements
from the client.
Note: it would have been nice to use an
asynchronous receive, like having the
SIGIO (or whatever) signal trigger a
receive function at both the client and server
ends, but I couldn't get that working in
my version of UNIX, so I just do it very
simply another way. (Blocking receives are
guarded by a SIGALRM timeout.)
ANYway, the problem is that sometimes all
30 frames are sent and positively acknowledged
in something like 28-30 millisecs. But about half
the time, the run takes 200-300 millisecs. There
are no other processes running on the host(s).
Also, the one protocol (Stop & Wait), in which
the server sends just one frame, waits for the
acknowledgement, sends the next frame, and so on,
ALWAYS runs in the 28-30 millisec range.
There doesn't seem to be much of a pattern in
which frames take longer.
(I record the time at the server end for each
frame sent or received.)
One more thing: I need to write up the report
today (May 15) or tomorrow.
Thanks for any insight!