>Does anybody have any 'field' experience about how much faster/less
>reliable UDP is than TCP? For example, if UDP is five times faster going
>between two machines on the same subnet, I could have 4 UDP packets
>followed by a TCP packet for positional updates. What sort of bandwidth can
It all depends on the packet loss; try a ping and see what percentage of
packet loss you get. If there is none, as should happen on the local
subnet, then TCP is going to be as fast as UDP. After all, it's all IP
packets, so the packets themselves take the same amount of time to get
there. OTOH, if there is packet loss, TCP will wait for a
retransmission before anything else gets through. If the retransmission
itself, or some acknowledgement that would open up the transmission
window, gets lost, the TCP connection can stall for quite a bit. A good
way to get a feel for this is to do a talk session over an overseas
link: at times the transmission appears to stop, then you get 5 lines of
the other person's typing all at once. That's the price TCP pays for
reliable, in-order delivery.
Quote:>If you have N clients connected through TCP, then each client has an array
>of N-1 sockets, one for each client. I assume you'd use a SIGIO to get
>reads and then select() to see which one caused the SIGIO.
right.. though I'd recommend just setting a flag (in a volatile var) in
the SIGIO handler, and doing the actual work somewhere in the main loop,
so you avoid reentrancy problems.
>So here's my Master Plan so far;
>1 UDP multicast group which sends sync packets several times per second to
>keep each client's real-time clock in sync and to send time-stamped
>not-required information at a high rate (say 10Hz?) Forget about UDP
>retransmission - the client should be able to interpolate between UDP
>packets and try and keep 'jumps' hidden from the player when a correct TCP
>packet arrives. Primary consideration is speed, which'll be limited mostly
sounds good so far .. though you really can trust computers' clocks to
keep pretty much in time.
Quote:>1 TCP network that connects each client to every other client (or a
>star-type network with a server at the centre) for essential messages and
>once-per-second sync packets. I suppose a star-type arrangement is best
>because that'd use N socket pairs rather than N*(N-1)/2. TCP packets will
>be the 'bedrock' - any UDP packets with a timestamp greater than the
>next-expected TCP packet will be stored for later action.
which means that basically your whole game will be stalled if TCP locks
up for a bit... don't count on once-per-second packets making it through
over TCP if you want the game to be playable over bad connections...
OTOH if you're limiting yourself to good connections with hardly any
packet loss, like a local subnet, then you can do pretty much everything
over TCP itself.
Quote:>Is it OK to have a TCP socket set up on a separate port on the client side
>that's set to listen(...,5) at the start and is only accept(....)ed when
>the client gets a new-client message from the server?
sure .. you want to select() for reading on it before calling accept(),
to make sure there's a connection to be accept()ed (otherwise the call
to accept() would block).
Quote:>Last question - are there any conventions/suggestions for picking multicast
>group ip addresses?
no clue about multicast...
Roger Espel Llima