socket send packets lost

Post by Frank Natol » Sat, 21 Dec 2002 04:54:20



Win2000 SP3. WSAStartup MAKEWORD(2,2). Create socket
PF_INET, SOCK_STREAM, IPPROTO_IP. No calls to change
blocking status (thus sync not async).

When socket send() is invoked rapidly enough to apparently
exhaust some internal Win2000 buffer, rather than block it
appears to simply begin discarding packets and never
recover. Tried getsockopt SO_SNDBUF, which returned
8KB. Tried setsockopt SO_SNDBUF with a value of 64KB,
which did not affect the problem, i.e., transmit packets
started being discarded at the same place. send() is
neither returning an error nor a value less than the length
of the transmit packet.

Why are these transmit packets lost? Thanks.
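For reference, the buffer-size probing described above looks roughly like this in Berkeley-sockets form (a sketch only; set_send_buffer is an illustrative helper, not a Winsock API, and the Winsock spelling differs in headers, WSAStartup, and the (const char*) cast on the option value):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

// Ask the stack for a larger send buffer and report what it actually granted.
// Illustrative helper; the stack is free to round up or clamp the request.
int set_send_buffer(int s, int bytes) {
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) != 0)
        return -1;
    int granted = 0;
    socklen_t len = sizeof(granted);
    if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &granted, &len) != 0)
        return -1;
    return granted;   // may differ from the value requested
}
```

Note that, as the thread goes on to establish, enlarging SO_SNDBUF would not be expected to fix a framing bug anyway.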

 
 
 

socket send packets lost

Post by Alun Jones » Sat, 21 Dec 2002 05:10:02




>When socket send is invoked rapidly enough to apparently
>exhaust some internal Win2000 buffer, rather than block it
>appears to simply begin discarding packets and never
>recover.

I'm confused - how do you "invoke send rapidly enough" if it's blocking?  I'd
be more inclined to suspect a race condition - that you're somehow accessing a
different socket than the one you think you are.

Alun.
~~~~

[Please don't email posters, if a Usenet response is appropriate.]
--
Texas Imperial Software   | Try WFTPD, the Windows FTP Server. Find us at
1602 Harvest Moon Place   | http://www.wftpd.com or email
Cedar Park TX 78613-1419  | VISA/MC accepted.  NT-based sites, be sure to
Fax/Voice +1(512)258-9858 | read details of WFTPD Pro for XP/2000/NT.

 
 
 

socket send packets lost

Post by Frank Natol » Sat, 21 Dec 2002 05:37:45


The server program, upon a successful listen/accept,
blasts a barrage of relatively small initialization
packets (50 bytes) to the client program. As stated below,
the send() neither fails (no SOCKET_ERROR) nor returns a
value less than the number of bytes requested. The client
program, however, stops receiving packets after a certain
point. If, after each call to send(), I insert a sleep
(10), the problem disappears.
>-----Original Message-----
>In article <01b101c2a798$6bba3c30

>>When socket send is invoked rapidly enough to apparently
>>exhaust some internal Win2000 buffer, rather than block it
>>appears to simply begin discarding packets and never
>>recover.

>I'm confused - how do you "invoke send rapidly enough" if it's blocking?  I'd
>be more inclined to suspect a race condition - that you're somehow accessing a
>different socket than the one you think you are.

>Alun.
>~~~~
 
 
 

socket send packets lost

Post by Alun Jones » Sat, 21 Dec 2002 06:34:33




>The server program, upon a successful listen/accept,
>blasts a barrage of relatively small initialization
>packets (50 bytes) to the client program. As stated below,
>the send() neither fails (no SOCKET_ERROR) nor returns a
>value less than the number of bytes requested. The client
>program, however, stops receiving packets after a certain
>point. If, after each call to send(), I insert a sleep
>(10), the problem disappears.

Put a network sniffer on the line.  See what's _actually_ happening.  Between
client and server, someone appears to be dropping packets - see whether the
packets are sent or not.

Alun.
~~~~


 
 
 

socket send packets lost

Post by Simon Cooke » Sat, 21 Dec 2002 07:27:05





> > The server program, upon a successful listen/accept,
> > blasts a barrage of relatively small initialization
> > packets (50 bytes) to the client program. As stated below,
> > the send() neither fails (no SOCKET_ERROR) nor returns a
> > value less than the number of bytes requested. The client
> > program, however, stops receiving packets after a certain
> > point. If, after each call to send(), I insert a sleep
> > (10), the problem disappears.

> TCP is a stream, and not packets. Try reading this and see if it helps:
> http://tangentsoft.net/wskfaq/articles/effective-tcp.html

I assure you, TCP is indeed made up out of packets of data. Maybe not at
the API level, but underneath it is. And many people (myself included)
build packet (de)serialization routines on top of TCP because TCP gives
you guaranteed transmission of data without having to roll it *all*
yourself on top of UDP.

His problem is like one I've seen myself in my own app. Data just stops
transferring for no apparent reason. About the only thing I can come up
with is that some internal buffer starts stacking things up, but not
notifying winsock that there is incoming data available. I've been
through the code time after time now, and there is just no explanation
for it. SO_SNDBUF is set to 0 (I'm using overlapped IO with my own
buffering scheme), and it just stops dead for a while... then each
packet I push into the queue appears on the other side 3 or so behind
what's actually being sent out onto the network at that time.

No errors, no nothing. Just weird transmission issues. Both systems just
sit and pull data out of their read buffers continuously, and the load
and amount of data transmitted is low, so there shouldn't be an issue.

Simon

 
 
 

socket send packets lost

Post by Phil Frisbie, Jr » Sat, 21 Dec 2002 07:17:29



> The server program, upon a successful listen/accept,
> blasts a barrage of relatively small initialization
> packets (50 bytes) to the client program. As stated below,
> the send() neither fails (no SOCKET_ERROR) nor returns a
> value less than the number of bytes requested. The client
> program, however, stops receiving packets after a certain
> point. If, after each call to send(), I insert a sleep
> (10), the problem disappears.

TCP is a stream, and not packets. Try reading this and see if it helps:
http://tangentsoft.net/wskfaq/articles/effective-tcp.html

Phil Frisbie, Jr.
Hawk Software
http://www.hawksoft.com

 
 
 

socket send packets lost

Post by Alun Jones » Sat, 21 Dec 2002 08:03:10




>I assure you, TCP is indeed made up out of packets of data. Maybe not at
>the API level, but underneath it is. And many people (myself included)
>build packet (de)serialization routines on top of TCP because TCP gives
>you guaranteed transmission of data without having to roll it *all*
>yourself on top of UDP.

However, it also gives you the effect of rolling everything into a stream.  
You send a hundred "packets" of fifty bytes apiece, and you don't get a
hundred "packets" of fifty bytes apiece, you get a stream of five thousand
bytes.  It's broken up and re-assembled at any number of places along the
path.  You'll probably end up with one recv() call of the first fifty bytes,
then another three of about 1500, and finally a recv() of ~450 bytes.  
Probably.  Of course, you'll also get any number of other possibilities going
on, just so long as it adds up to the right amount of data in the end.

>No errors, no nothing. Just weird transmission issues. Both systems just
>sit and pull data out of their read buffers continuously, and the load
>and amount of data transmitted is low, so there shouldn't be an issue.

The mention of "data loss" along with "TCP is packets" is a frequent
combination, and it's almost always wrong.  When data loss occurs, that's a
_serious_ bug in TCP/IP.  It's not something that people wouldn't have
noticed, because web sites wouldn't download, zip files would be corrupt, etc,
etc.  It'd be roughly akin to having molecular bonds lose all their cohesion -
catastrophic, even in relatively small quantities.  So, it's _highly_ unlikely
that data loss is occurring.  Then we go to the likely occurrences here.  
You've set buffer sizes to zero.  That's a really dangerous thing to do,
particularly if you don't understand what you're doing.  You also haven't run
a network monitor to see what's actually going across the wire.

What I see often reported as "data loss" is the following:
sender calls send() repeatedly with, say fifty bytes in each send.
receiver calls recv() repeatedly, and expects to receive fifty bytes in each
receive.
receiver either gets a smaller amount, or a larger amount, and gets out of
sync with the sender.
developer asserts that data has been lost, without realising that it is his
code that is losing the data, not the stack or the network.
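The usual cure for the pattern described above is to loop on recv() until a full message has arrived, instead of assuming one recv() per send(). A minimal sketch, with the stream simulated in memory so the chunking behaviour is visible (FakeStream and read_exactly are illustrative names, not Winsock calls):

```cpp
#include <cstring>
#include <string>
#include <algorithm>

// Simulates a TCP stream: hands back arbitrary-sized chunks, never
// message-sized ones -- just like a real recv() is allowed to.
struct FakeStream {
    std::string data;
    size_t pos = 0;
    // Like recv(): returns up to `want` bytes, in whatever chunk it likes.
    size_t recv_some(char* buf, size_t want) {
        size_t chunk = std::min({want, data.size() - pos, size_t(7)});
        memcpy(buf, data.data() + pos, chunk);
        pos += chunk;
        return chunk;
    }
};

// Loop until exactly `need` bytes arrive: the receiver must impose its
// own framing, because the stream will not do it for him.
bool read_exactly(FakeStream& s, char* buf, size_t need) {
    size_t got = 0;
    while (got < need) {
        size_t n = s.recv_some(buf + got, need - got);
        if (n == 0) return false;   // connection closed mid-message
        got += n;
    }
    return true;
}
```

The same loop works over a real socket by substituting recv() for recv_some().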

Alun.
~~~~


 
 
 

socket send packets lost

Post by Simon Cooke » Sat, 21 Dec 2002 08:28:23


"Alun Jones" <a...@texis.com> wrote in message

news:OesM9.376$j_5.279662637@newssvr11.news.prodigy.com...

> In article <#P5i926pCHA.2756@TK2MSFTNGP09>, "Simon Cooke"
> <simonco...@earthlink.net> wrote:
> >I assure you, TCP is indeed made up out of packets of data. Maybe not at
> >the API level, but underneath it is. And many people (myself included)
> >build packet (de)serialization routines on top of TCP because TCP gives
> >you guaranteed transmission of data without having to roll it *all*
> >yourself on top of UDP.

> However, it also gives you the effect of rolling everything into a stream.
> You send a hundred "packets" of fifty bytes apiece, and you don't get a
> hundred "packets" of fifty bytes apiece, you get a stream of five thousand
> bytes.  It's broken up and re-assembled at any number of places along the
> path.  You'll probably end up with one recv() call of the first fifty bytes,
> then another three of about 1500, and finally a recv() of ~450 bytes.
> Probably.  Of course, you'll also get any number of other possibilities going
> on, just so long as it adds up to the right amount of data in the end.

> >No errors, no nothing. Just weird transmission issues. Both systems just
> >sit and pull data out of their read buffers continuously, and the load
> >and amount of data transmitted is low, so there shouldn't be an issue.

> The mention of "data loss" along with "TCP is packets" is a frequent
> combination, and it's almost always wrong.  When data loss occurs, that's a
> _serious_ bug in TCP/IP.  It's not something that people wouldn't have
> noticed, because web sites wouldn't download, zip files would be corrupt, etc,
> etc.  It'd be roughly akin to having molecular bonds lose all their cohesion -
> catastrophic, even in relatively small quantities.  So, it's _highly_ unlikely
> that data loss is occurring.  Then we go to the likely occurrences here.
> You've set buffer sizes to zero.  That's a really dangerous thing to do,
> particularly if you don't understand what you're doing.  You also haven't run
> a network monitor to see what's actually going across the wire.

No; I've set the send buffer size to zero, which is perfectly acceptable
when using overlapped IO.

> What I see often reported as "data loss" is the following:
> sender calls send() repeatedly with, say fifty bytes in each send.
> receiver calls recv() repeatedly, and expects to receive fifty bytes in each
> receive.
> receiver either gets a smaller amount, or a larger amount, and gets out of
> sync with the sender.
> developer asserts that data has been lost, without realising that it is his
> code that is losing the data, not the stack or the network.

Well, let's see:

Here's my Client-side packet writer. It's quite simple. Socket is a
simple wrapper around the winsock api which carries the socket handle
with it. Event is a simple wrapper around a Win32 event object.
Synchronizable wraps a Win32 critical section object, whereas
Synchronize() is a class which performs a lock on creation, and an
unlock on destruction. CThreadImpl is an object-based wrapper around
win32 threading, with thunking.

A ClientNetwork object is created, which performs the connection and
host lookup (that bit of code needs debugging right now; it's faulty --
the actual network code is solid).

I still get packets (my implementation) delayed by a considerable amount
somewhere, under certain conditions. The ethernet connection is fine; no
packet retransmissions. netstat stats also appear to be fine.
Timestamping everything gives packets arriving typically within 1ms of
being sent. Except for this one lockup condition which I thought I'd
gotten rid of, where everything just slows down and new packets entering
the queue (say, because of user commands) only arrive at the remote
system after 3 or so other commands have been pushed into the queue.

It's plain bizarre.

class PacketQueue : public Synchronizable
{
public:
 class PACKET {
 public:
  BYTE* pData;
  size_t len;

  PACKET() : pData(NULL), len(0)
  {
  }

  void Delete()
  {
   if (pData != NULL)
   {
    delete[] pData;
    pData = NULL;
   }
  }

 };

private:

 deque<PACKET> queue;

 static void DeletePacket(PACKET& p);

public:

 Event dataAvailable;

 PacketQueue()
 {
  dataAvailable.Create();
 }

 ~PacketQueue()
 {
  if (!queue.empty()) for_each(queue.begin(), queue.end(), DeletePacket);
 }

 size_t GetCount()
 {
  Synchronize to(*this);
  return queue.size();
 }

 void Clear()
 {
  Synchronize to(*this);
  dataAvailable.ResetEvent();

  if (!queue.empty()) {
   for_each(queue.begin(), queue.end(), DeletePacket);
  }

  queue.clear();
 }

 void Add(BYTE* pData, size_t len)
 {
  {

   Synchronize to(*this);

   PACKET p;
   p.pData = pData;
   p.len = len;

   if (*(reinterpret_cast<MessageType*>(pData)) == MsgMeasurementRun)
   {
    Packet_MeasurementRun* pmr = reinterpret_cast<Packet_MeasurementRun*>(pData);
    pmr->queueTime = GetTimeStamp();
   }

   queue.push_back(p);
  }

  dataAvailable.SetEvent();
 }

 bool IsEmpty()
 {
  Synchronize to(*this);
  return queue.empty();
 }

 bool Get(PACKET& packet)
 {
  {
   Synchronize to(*this);
   if (queue.empty()) return false;

   if (queue.size() == 1)
   {
    dataAvailable.ResetEvent();
   }

   packet = queue.front();
   queue.pop_front();
  }

  return true;
 }

};

class ClientNetworkWriter : public CThreadImpl<ClientNetworkWriter>
{
 Event& cancelOps;
 PacketQueue& queue;
 WSAOVERLAPPED ov;

 Event completion;
 Socket out;

public:
 ClientNetworkWriter(Event& cancel, PacketQueue& packetqueue) :
   cancelOps(cancel), queue(packetqueue)
 {
  completion.Create();
  ov.hEvent = completion;
 };

 ~ClientNetworkWriter()
 {
 }

 void Initialize(Socket& outsocket)
 {
  if (out.IsValid()) out.Detach();

  out.Attach(outsocket);
 }

 DWORD Run()
 {
  DWORD result = RunNetworkSend();
  //DEBUG: OutputDebugString("ClientNetworkWriter has exited\n");

  out.Detach();
  return result;
 }

 DWORD RunNetworkSend()
 {

  // Add an outgoing time-sync packet.
  Packet_SyncTimeStamps pst;
  ::GetSystemTime(&(pst.systime));
  pst.masterclock = ::GetTickCount();
  BYTE* pData = new BYTE[sizeof(Packet_SyncTimeStamps)];
  memcpy(pData, &pst, sizeof(Packet_SyncTimeStamps));
  queue.Add(pData, sizeof(Packet_SyncTimeStamps));

  while (true)
  {
   if (!WaitForDataOrCancel()) return WSAECANCELLED;

   PacketQueue::PACKET p;
   queue.Get(p);

   BYTE* pTemp = p.pData;
   int len = p.len;

   while (len > 0)
   {
    WSABUF wb;
    wb.buf = (char*) pTemp;
    wb.len = len;

    DWORD unused;
    if (out.WSASend(&wb, 1, &unused, 0, &ov, NULL) == SOCKET_ERROR)
    {
     if (::WSAGetLastError() != WSA_IO_PENDING)
     {
      p.Delete();
      return ::WSAGetLastError();
     }
    }

    if (!WaitForCompletionOrCancel())
    {
     CleanUp();
     p.Delete();
    }

    DWORD count, flags;
    if (!::WSAGetOverlappedResult(out, &ov, &count, TRUE, &flags))
    {
     p.Delete();
     return ::WSAGetLastError();
    }

    if (count == 0) {
     p.Delete();
     return WSAENOTCONN; // connection closed.
    }

    len -= count;
    pTemp += count;
   }

   p.Delete();
  }
 }

 void CleanUp()
 {
  ::CancelIo((HANDLE)(SOCKET)out);
  ::WaitForSingleObject(completion, INFINITE);
 }

 bool WaitForDataOrCancel()
 {
  HANDLE events[2];
  events[0] = cancelOps;
  events[1] = queue.dataAvailable;

  return (::WaitForMultipleObjects(2, events, FALSE, INFINITE) == WAIT_OBJECT_0 + 1);
 }

 bool WaitForCompletionOrCancel()
 {
  HANDLE events[2];
  events[0] = cancelOps;
  events[1] = completion;

  return (::WaitForMultipleObjects(2, events, FALSE, INFINITE) == WAIT_OBJECT_0 + 1);
 }

};

class NetworkReader : public CThreadImpl<NetworkReader>
{
 Socket in;
 Event& cancelOps;
 Event completion;
 WSAOVERLAPPED ov;

 PacketQueue& queue;

 BYTE* pCurrentPacket; //< the packet we're currently building
 BYTE* pCurrentPacketIndex; //< offset into the current packet.
 size_t packet_left;  //< data left in the current packet
 size_t packetsize;

 Buffer readBuffer;
 bool pendingRead;

 bool gotHeader;

    // Disable copy constructor
    NetworkReader(const NetworkReader& netw);

public:
 NetworkReader(Event& cancel, PacketQueue& packetqueue) :
   cancelOps(cancel), queue(packetqueue),
   pCurrentPacket(NULL), packet_left(0),
   packetsize(0), pendingRead(false), gotHeader(false)
 {
  completion.Create();
  ov.hEvent = completion;
 }

 ~NetworkReader()
 {

 }

 void Initialize(Socket& readsocket)
 {
  // Allocate a buffer to read data into

  readBuffer.AllocateBuffer();

  // Attach to read socket

  if (in.IsValid()) in.Detach();
  in.Attach(readsocket);

  // Clear the current packet and packet queue.

  pCurrentPacket = NULL;
  pCurrentPacketIndex = NULL;
  packet_left = 0;
  packetsize = 0;

  queue.Clear();

  // Reset the completion event status

  completion.ResetEvent();
  pendingRead = false;
  gotHeader = false;
 }

 DWORD Run()
 {
  DWORD result = RunNetworkRead();

  //DEBUG: OutputDebugString("NetworkReader has exited\n");

  CleanUp();

  in.Detach();
  readBuffer.FreeBuffer();
  pendingRead = false;
  gotHeader = false;

  return result;
 }

private:

 DWORD RunNetworkRead()
 {
  while (true)
  {
   // start reading on the socket.
   if (!StartRead()) return -1;

   if (!WaitForCompletionOrCancel()) return -1;

   // complete the read
   if (!CompleteRead()) return -1;

   // Take a packet from the buffer (or keep building one)
   PopPacket();
  }
 }

 void PopPacket()
 {
  if (!gotHeader)
  {
   int size;
   BYTE* pData = readBuffer.GetContiguousBlock(size);

   if (size < sizeof(MessageType))
    return;

   MessageType msg = *((MessageType*)pData);

   int len;

   switch(msg)
   {
   case MsgTelemetryNoScan:
    len = sizeof(Packet_TelemetryNoData);
    break;

   case MsgTelemetryWithScan:
    len = sizeof(Packet_TelemetryWithData);
    break;

   case MsgMeasurementRun:
    len = sizeof(Packet_MeasurementRun);
    break;

   case
...


 
 
 

socket send packets lost

Post by Alun Jones » Sat, 21 Dec 2002 11:10:17






>> You've set buffer sizes to zero.  That's a really dangerous thing to do,
>> particularly if you don't understand what you're doing.  You also haven't run
>> a network monitor to see what's actually going across the wire.

>No; I've set the send buffer size to zero; which is perfectly acceptable
>when using overlapped IO.

It's perfectly acceptable, _if_ you understand what you're doing.  Otherwise,
it's a great way to halve your network performance.

>I still get packets (my implementation) delayed by a considerable amount
>somewhere, under certain conditions. The ethernet connection is fine; no
>packet retransmissions. netstat stats also appear to be fine.
>Timestamping everything gives packets arriving typically within 1ms of
>being sent. Except for this one lockup condition which I thought I'd
>gotten rid of, where everything just slows down and new packets entering
>the queue (say, because of user commands) only arrive at the remote
>system after 3 or so other commands have been pushed into the queue.

>It's plain bizarre.

Right, like I have time (and interest) to go through that much code to try and
find your mistakes.

Here's a stab in the dark - you do realise that the Nagle algorithm and
delayed ACK are still enabled even when you set your buffer size to zero,
right?  Let me guess - the time that the "packets" (I wish you wouldn't use
that term - it indicates that you are thinking in the wrong terms for TCP) are
delayed is on the order of 200ms, give or take 55ms or so?
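For the record, disabling Nagle is a one-line socket option. A sketch in Berkeley-sockets form (disable_nagle is an illustrative helper; note it does nothing about delayed ACK on the *peer*, which is the other half of the ~200ms interaction described here):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

// Disable the Nagle algorithm on a TCP socket so small writes are not
// held back waiting for outstanding ACKs.  Setting SO_SNDBUF to zero
// does NOT do this; they are independent knobs.
bool disable_nagle(int s) {
    int on = 1;
    return setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)) == 0;
}
```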

Alun.
~~~~


 
 
 

socket send packets lost

Post by nosp » Sat, 21 Dec 2002 11:30:26


Before you 2 get into whether it's packets or not, how about I ask a
question.  Why are you wasting time with multiple sends of 50 bytes
each...why not put all the data together (about 100 * 50 =
5000 bytes) and send it in one chunk?  I've sent as much as 50K bytes at
once.

First of all, you don't get a "stream of (5000) bytes", you get a
stack of 100 separate sends (each of which may be broken up, but all
start out as separate "sends" that must be tracked, reported and
confirmed) that have to stack up in the input buffers.  You're
treating this like you need to control packet size.  Let winsock
handle the packetizing; you just send the data.
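The coalescing being suggested is straightforward; a sketch (coalesce is an illustrative helper, assuming the messages are plain byte blocks):

```cpp
#include <vector>

// Coalesce many small messages into one contiguous buffer, so the whole
// batch can go out in a single send() instead of 100 separate ones.
std::vector<char> coalesce(const std::vector<std::vector<char>>& msgs) {
    std::vector<char> out;
    for (const auto& m : msgs)
        out.insert(out.end(), m.begin(), m.end());
    return out;
}
```

With 100 messages of 50 bytes this yields one 5000-byte buffer, and hence one send() call.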


-
One of many Computer Consultants in the USA
http://drshell.home.mindspring.com/
30 years in the computer field, 20 with Unisys.
-
 
 
 

socket send packets lost

Post by Simon Cooke » Sat, 21 Dec 2002 17:51:18







>>> You've set buffer sizes to zero.  That's a really dangerous thing
>>> to do, particularly if you don't understand what you're doing.  You
>>> also haven't run a network monitor to see what's actually going
>>> across the wire.

>> No; I've set the send buffer size to zero; which is perfectly
>> acceptable when using overlapped IO.

> It's perfectly acceptable, _if_ you understand what you're doing.
> Otherwise, it's a great way to halve your network performance.

I'm not operating in a streaming mode here. I have a large quantity of
pretty small packets which need to go out in realtime over the network.
I can afford the system to stall occasionally for retries, but I have to
(on average) minimize the amount of time that the packets stay stacked
up -- even at the expense of network overhead. The packets must arrive
in chronological order, thus I'm using TCP/IP because I don't want to
have to reorder the messages when they arrive at the other end myself as
I would if I used UDP. I'm very aware of the issues here.

>> I still get packets (my implementation) delayed by a considerable
>> amount somewhere, under certain conditions. The ethernet connection
>> is fine; no packet retransmissions. netstat stats also appear to be
>> fine. Timestamping everything gives packets arriving typically
>> within 1ms of being sent. Except for this one lockup condition which
>> I thought I'd gotten rid of, where everything just slows down and
>> new packets entering the queue (say, because of user commands) only
>> arrive at the remote system after 3 or so other commands have been
>> pushed into the queue.

>> It's plain bizarre.

> Right, like I have time (and interest) to go through that much code
> to try and find your mistakes.

You certainly have the time and interest to make prurient claims about
others' design choices, so I thought that given you were so certain
you'd have a sure-fire answer up your sleeve. Apparently not.

> Here's a stab in the dark - you do realise that the Nagle algorithm
> and delayed ACK are still enabled even when you set your buffer size
> to zero, right?

"    // Turn off Nagle slow-start algorithm
    BOOL val = TRUE;
    client.setsockopt(IPPROTO_TCP, TCP_NODELAY, (const char*)&val, sizeof(val));
"

> Let me guess - the time that the "packets" (I wish
> you wouldn't use that term - it indicates that you are thinking in
> the wrong terms for TCP) are delayed is on the order of 200ms, give
> or take 55ms or so?

Packets, messages... whatever. I'm thinking in perfectly the right terms
for TCP, thankyouverymuch.

A packet: a collection of data treated in a single lump. I push packets
over the network. I retrieve packets from the network. Whether the
underlying protocol is a stream-based protocol or not is irrelevant; I
pass a header over which indicates the size of the data which follows
it, which I then read from the network until it's done, producing a
single *packet*. Yes, I'm aware that TCP is not message based. Yes, I
know that it does not magically slice up the stream according to message
boundaries. However, that doesn't stop me packetizing data and throwing
it over that connection. It just involves more work on my end.
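The header-plus-payload scheme described here can be sketched as a pair of helpers (frame and deframe are illustrative names; a 4-byte native-endian length prefix is assumed, so both ends must agree on byte order):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Frame a message as [4-byte length][payload] for sending over TCP.
std::string frame(const std::string& payload) {
    uint32_t len = (uint32_t)payload.size();
    std::string out(4, '\0');
    memcpy(&out[0], &len, 4);   // assumes both ends share byte order
    return out + payload;
}

// Pull complete messages back out of an accumulated stream buffer.
// Any trailing partial message stays in `buf` until more data arrives --
// this is where "packets" are re-imposed on the byte stream.
std::vector<std::string> deframe(std::string& buf) {
    std::vector<std::string> msgs;
    while (buf.size() >= 4) {
        uint32_t len;
        memcpy(&len, buf.data(), 4);
        if (buf.size() < 4 + len) break;   // header seen, body incomplete
        msgs.push_back(buf.substr(4, len));
        buf.erase(0, 4 + len);
    }
    return msgs;
}
```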

And no, it's not delayed on the order of 200ms. Typical network timing
for a packet to leave the client and arrive at the server is on the
order of ~1ms. *except* when the system stalls, at which point it takes
three new messages to be sent for the first one to arrive at the server.
That is: if I pass the following:

50 byte message A        time 0s
50 byte message B        time 5s
50 byte message C        time 10s
50 byte message D        time 15s
50 byte message E        time 20s

Message A arrives at the server at time 15s (when and only when D is
posted).
Message B arrives at the server at time 20s (when and only when E is
posted).

This problem actually gets worse by maybe 20% when the send buffer is
enabled.

Simon

 
 
 

socket send packets lost

Post by Alan J. McFarlan » Sat, 21 Dec 2002 23:32:15






>>> The server program, upon a successful listen/accept,
>>> blasts a barrage of relatively small initialization
>>> packets (50 bytes) to the client program. As stated below,
>>> the send() neither fails (no SOCKET_ERROR) nor returns a
>>> value less than the number of bytes requested. The client
>>> program, however, stops receiving packets after a certain
>>> point. If, after each call to send(), I insert a sleep
>>> (10), the problem disappears.

Frank, have you considered the combined effect of Slow-Start and Delayed
Ack?  (Note that Slow-Start is entirely separate from Nagle.)  Have a read
of e.g. http://www.sun.com/sun-on-net/performance/tcp.slowstart.html

There's also some good stuff starting at
http://msdn.microsoft.com/library/en-us/winsock/winsock/high_performa...
dows_sockets_applications_2.asp?frame=true .  For instance:

"Another aspect of TCP/IP is slow-start, which takes place whenever a
connection is established. Slow-start is an artificial limit on the number
of data segments that can be sent before acknowledgment of those segments is
received. Slow-start is designed to limit network congestion. When a
connection over Ethernet is established, regardless of the receiver's window
size, a 4KB transmission can take up to 3-4 RTT due to slow-start."

Your words should suggest bad use of the network (/TCP): "blasts a barrage"!
Why on earth are you sending a number of small blocks of data rather than
one large one?

>> TCP is a stream, and not packets. Try reading this and see if it
>> helps: http://tangentsoft.net/wskfaq/articles/effective-tcp.html

> I assure you, TCP is indeed made up out of packets of data. Maybe not
> at the API level, but underneath it is. And many people (myself
> included) build packet (de)serialization routines on top of TCP
> because TCP gives you guaranteed transmission of data without having
> to roll it *all* yourself on top of UDP.

Did you read the document at that given URL before sending this reply?  The
heading text "Packets Are Illusions" alone is quite apt. :->

Your /understanding/ of TCP brings to mind the fact that light is
demonstrably a wave...but can also be shown to be particles (photons), and
it's likely that something else is going on at a lower level.  So it depends
at what level one looks.

With TCP, at the user level it is apparently discrete, by the effect of
having to pass blocks of data to its function call invocations.  Also at the
network layer it is discrete, since IP is a datagram protocol and data must be
sent in discrete packets.  However, just because at those two levels you
apparently (from your paragraph above) have looked at TCP and have seen it
to be discrete does not mean that TCP is a sequenced packet protocol.

To quote from RFC793--"Transmission Control Protocol":
  Basic Data Transfer:

    The TCP is able to transfer a continuous stream of octets in each
    direction between its users by packaging some number of octets into
    segments for transmission through the internet system.  In general,
    the TCPs decide when to block and forward data at their own
    convenience.

Alan

 
 
 

socket send packets lost

Post by Alun Jones » Sun, 22 Dec 2002 01:59:17




>I'm not operating in a streaming mode here. I have a large quantity of
>pretty small packets which need to go out in realtime over the network.

Then why the hell are you insisting on using a streaming protocol that will
_ensure_ that your packets get held up waiting for acknowledgements?

Quote:>I can afford the system to stall occasionally for retries, but I have to
>(on average) minimize the amount of time that the packets stay stacked
>up -- even at the expense of network overhead.

You can't do that in TCP.  I'm not sure that you can do that reliably at all -
if you don't get a packet through, it _has_ to stall further packets, or it
_has_ to lose a packet, or it _has_ to declare that the data path is no longer
usable.

Quote:>The packets must arrive
>in chronological order, thus I'm using TCP/IP because I don't want to
>have to reorder the messages when they arrive at the other end myself as
>I would if I used UDP. I'm very aware of the issues here.

No, you're not.  TCP will retry, and retry and retry, until it gets data
through.  While it is retrying, any further outgoing data is stuck to the back
of any queued data.  You can't turn TCP into a packet protocol that behaves
like a packet protocol.  You can turn UDP into a reliable packet-based
protocol, but you can't turn TCP into a packet protocol.  As soon as data is
resent, it'll be assembled with other data, and you'll find your later packets
are split.  You can't do what you're trying to do with TCP.

Quote:>You certainly have the time and interest to make prurient claims about
>others' design choices, so I thought that given you were so certain
>you'd have a sure-fire answer up your sleeve. Apparently not.

If you don't want the advice, then don't take the advice.  But you aren't
going to like what you get when you try and treat TCP as something it can't
be.

Quote:>Packets, messages... whatever. I'm thinking in perfectly the right terms
>for TCP, thankyouverymuch.

Then why isn't your program working properly?  Obviously, you have something
wrong.

Quote:>A packet: a collection of data treated in a single lump. I push packets
>over the network. I retrieve packets from the network. Whether the
>underlying protocol is a stream-based protocol or not is irrelevant; I
>pass a header over which indicates the size of the data which follows
>it, which I then read from the network until it's done, producing a
>single *packet*.

But TCP doesn't care about your single lump - it'll stick it with others,
break it apart, and you have no way to prevent it from doing that.

I don't think that I can help you.

Alun.
~~~~

[Please don't email posters, if a Usenet response is appropriate.]
--
Texas Imperial Software   | Try WFTPD, the Windows FTP Server. Find us at

Cedar Park TX 78613-1419  | VISA/MC accepted.  NT-based sites, be sure to
Fax/Voice +1(512)258-9858 | read details of WFTPD Pro for XP/2000/NT.

 
 
 

socket send packets lost

Post by <n.. » Sun, 22 Dec 2002 05:44:15


I don't see how your code handles this case:

recv of 10 bytes
recv of 40 bytes <--- Got one packet
recv of 1 byte
recv of 59 bytes <-- Got one packet + 10 bytes
recv of 40 bytes <-- Got another packet

Which may be your problem.

 
 
 

socket send packets lost

Post by Simon Cook » Sun, 22 Dec 2002 06:51:42





> No, you're not.  TCP will retry, and retry and retry, until it gets data
> through.  While it is retrying, any further outgoing data is stuck to the back
> of any queued data.  You can't turn TCP into a packet protocol that behaves
> like a packet protocol.  You can turn UDP into a reliable packet-based
> protocol, but you can't turn TCP into a packet protocol.  As soon as data is
> resent, it'll be assembled with other data, and you'll find your later packets
> are split.  You can't do what you're trying to do with TCP.

Who cares if they're split? From a software point of view, I have an
infinitely long pipe of incoming data, which I can read in random-length
chunks, at random times. I write out the length of a message/packet to
the stream. I then write out that many bytes of data. On the receiving
end, I read back the length, and keep reading until I get all of the
bytes I need, then I keep doing that.

I'm not talking about doing a recv of 2 bytes for the length and then a
recv of length number of bytes -- which is what you seem to be adamant
that I'm doing. I even provided the code so that we wouldn't be having
this particularly stupid debate, but you declined to read it.

So carry on arguing out of ignorance, and telling me not to do what I'm
already not doing.

Here's my app's architecture:

Buffer is filled with data from TCP socket as often as possible, until
the buffer is full or the socket is closed.

My packet/message reader reads data from the buffer, looping until it
has copied enough data from the buffer to form a single packet/message.

Would you feel more comfortable if I was using CR/LF terminated strings
instead of binary data here? It's exactly the same as saying "I'm
reading one line at a time from the socket" -- just a different choice
of terminating condition.

Quote:> >You certainly have the time and interest to make prurient claims about
> >others' design choices, so I thought that given you were so certain
> >you'd have a sure-fire answer up your sleeve. Apparently not.

> If you don't want the advice, then don't take the advice.  But you aren't
> going to like what you get when you try and treat TCP as something it can't
> be.

If I wanted to use UDP, I'd use UDP. Frankly, the work involved in
making UDP work in a reliable fashion has already been done - it's
called TCP. So I'm using TCP. Thanks.

You're the one who seems to think that I'm trying to call send() with a
block of data, and then calling recv() on the other end expecting to get
that block - and only that block - back.

Quote:> >Packets, messages... whatever. I'm thinking in perfectly the right terms
> >for TCP, thankyouverymuch.

> Then why isn't your program working properly?  Obviously, you have something
> wrong.

Well done, Sherlock. The question is: what? If this were easy to debug, I
would have done it several months ago. As it is, this is an intermittent
problem that only seems to be triggered under very specific conditions,
making it pretty damn hard to debug.

Quote:> >A packet: a collection of data treated in a single lump. I push packets
> >over the network. I retrieve packets from the network. Whether the
> >underlying protocol is a stream-based protocol or not is irrelevant; I
> >pass a header over which indicates the size of the data which follows
> >it, which I then read from the network until it's done, producing a
> >single *packet*.

> But TCP doesn't care about your single lump - it'll stick it with others,
> break it apart, and you have no way to prevent it from doing that.

Good for TCP. It doesn't make a blind bit of difference to my app how
TCP does its thing under the hood; I'm still serializing data over the
connection, and that data is still getting deserialized perfectly at the
other end.

Quote:> I don't think that I can help you.

Not if you're going to keep claiming that I'm doing stuff that I'm not,
or that I'm using TCP incorrectly you're not, no.

Tell me: do you go around telling all of the SNMP software authors that
they can't possibly be sending messages the way they're doing it? Or the
HTTP guys -- "hell no, you can't provide a Content-Length field; TCP is
STREAM based! You can't possibly expect someone to be able to read a
chunk of data like that! It's a stream!?"

Simon

ps. I understand and know what you're saying. If you weren't so *y
pigheaded, you might realize that I'm actually talking about something
different here, and we're just using terms that are heavily overloaded
in the communications industry, thus the confusion.