socket send() does not actually send...

Post by Manoj Ran » Thu, 26 Jun 1997 04:00:00



Hi,

I have written a client-server application using Windows sockets
(SOCK_STREAM, TCP), on Windows NT.

The client application needs to send a lot of data to the server.
When the client sends data slowly, the server receives all of it. But when
the client sends data very fast, data is lost and the server does not receive it.
The send() call at the client returns success, so the client sends the next data.

Is there any way the client can find out what happened to the previous send()
before sending the next data?
Assume the server is too busy doing other computing and has no time to
receive data from the client. At the client side, send() does not return
any kind of error, so the client keeps sending data without knowing
that it is being lost.

IMPORTANT: Due to application architecture issues, I cannot send data and
wait for an acknowledgment from the server before sending the next data.

I have already tried the following.

1. At the client end, before calling send() I call select() to check
whether the socket is ready to send (a sketch of this check follows the list).
   But the FD_ISSET macro always returns true.
   The documentation for select() says that "writability (FD_ISSET returned
TRUE) means that a send or sendto will complete without blocking."
   There is no guarantee that send() will actually send the data.
   This did not solve the problem.

2. After send() I save a timestamp. When the next send() is called, if the
time difference is less than 10 milliseconds, I call Sleep(10).
   This gives TCP/IP at least 10 milliseconds between two send() calls.
   THIS WORKS!
   I don't lose data, because at the client end the socket gets enough time
to send the data.
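For reference, the select() check from step 1 looks roughly like this (a
minimal sketch, using the same names as the loop further down):

fd_set writefds;
struct timeval tv = { 0, 0 };    /* zero timeout: poll, don't block */

FD_ZERO(&writefds);
FD_SET(MySocket, &writefds);

/* the first argument is ignored by Winsock */
if (select(0, NULL, &writefds, NULL, &tv) > 0 && FD_ISSET(MySocket, &writefds))
{
    /* "writable" only means send() can accept at least one byte without
       blocking -- it says nothing about whether the server reads the data */
    send(MySocket, buffer, LengthOfBuffer, 0);
}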

But I really don't want to keep this Sleep(10) logic, because when I keep
the server very busy computing before it calls receive(), I start losing
data again!
Sleep() is just a workaround, not a genuine fix.

In its simplest form, to simulate the problem, the client end has a loop to
send data:

for (i = 0; i < 1000; i++)
{
        if (send(MySocket, (const char *)buffer, (int)LengthOfBuffer, 0)
                        == SOCKET_ERROR)
                break;
}

printf("count = %d\n", i);

Since send() always returns success, I need some way to check whether the
earlier send() actually completed, or whether TCP/IP is ready to accept the
next send(). At the client side I do not know whether the server is
receiving the data or not!

PLEASE LET ME KNOW WHERE I CAN FIND INFORMATION SPECIFIC TO SOCKET
PROGRAMMING PROBLEMS/ISSUES/BUGS...

Thanks,
Manoj

socket send() does not actually send...

Post by Allen Brunso » Thu, 26 Jun 1997 04:00:00



> I have written a client-server application using Windows
> sockets (SOCK_STREAM, TCP), on Windows NT.
<snip>
> The client application needs to send a lot of data to the server.
> When the client sends data slowly, the server receives all of it.
> But when the client sends data very fast, data is lost and the
> server does not receive it.
<snip>
> 2. After send() I save a timestamp. When the next send() is
> called, if the time difference is less than 10 milliseconds, I
> call Sleep(10); This gives TCP/IP at least 10 milliseconds
> between two send() calls.  THIS WORKS!

This is just a wild guess, but your problem might not have to do with
sockets, per se.

I'm writing a group of Win32 console-mode apps that control telephone calls
in real time.  Early in testing I noticed that my apps seemed sluggish; they
weren't responding to keyboard input all that well and they ran "jerky."
Shortly thereafter I noticed that whenever one of my apps was running the CPU
utilization of the machine shot up to 100 percent and stayed there,
regardless of what my app was doing, and as soon as I exited it the CPU time
went down to a more normal 5 percent average.

Putting a Sleep(20) in the programs' main loop solved both problems!  Now CPU
usage barely flutters at all when I start up my apps and keyboard response
and screen output are far more snappy. I guess I wasn't giving NT and the
other apps enough time to "breathe."  I'd guess that keyboard input was
sluggish because whatever part of NT is responsible for herding keystrokes
was starved for CPU time.

You could have a similar problem: the TCP/IP stack might be starved for CPU
time, so it isn't able to send your packets in a timely fashion.  If I'm
right, then you wouldn't have to put a Sleep(10) after every single send();
you could just put one in your main loop like I did.  It's worth trying.
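In other words, something like this, where DoWork() is a stand-in for
whatever your loop already does:

for (;;)
{
    DoWork();    /* poll sockets, check keyboard, and so on */
    Sleep(20);   /* give NT and the TCP/IP stack time to breathe */
}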

Allen
--
http://www.concentric.net/~Brunsona/

Sometimes quantity becomes quality.  -- Garry Kasparov

socket send() does not actually send...

Post by David Peter » Fri, 27 Jun 1997 04:00:00



> Putting a Sleep(20) in the programs' main loop solved both problems!  Now CPU
> usage barely flutters at all when I start up my apps and keyboard response
> and screen output are far more snappy. I guess I wasn't giving NT and the
> other apps enough time to "breathe."  I'd guess that keyboard input was
> sluggish because whatever part of NT is responsible for herding keystrokes
> was starved for CPU time.

This sounds like you're using non-blocking I/O. As an alternative to
Sleep, why not use blocking I/O with read timeouts? You can use the
SO_RCVTIMEO socket option to set these, or use timeouts on select() or
WSAAsyncSelect() calls. I believe sockets can also be tied to waitable
event objects, so you should be able to use WaitForSingleObject or
WaitForMultipleObjects to wait for either something to appear on your
socket or for a timeout to occur.
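For instance, a minimal sketch of the SO_RCVTIMEO route; note that on
Winsock the option value is a DWORD count of milliseconds:

DWORD timeout = 5000;    /* five seconds */

if (setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
               (const char *)&timeout, sizeof timeout) == SOCKET_ERROR)
{
    /* inspect WSAGetLastError() */
}

/* From now on, a blocking recv() on s that waits longer than this
   fails with WSAETIMEDOUT instead of hanging forever. */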

--
David Peter               | We convoke the Nephilim,
Insignia Solutions plc    | and they come to us,

Voice: +44 (0)1494 453351 | http://www.insignia.com

socket send() does not actually send...

Post by Felix Kasza [MV » Fri, 27 Jun 1997 04:00:00


Allen,

 > Putting a Sleep(20) in the programs' main loop solved both problems!

If you loop, you ought to look into other designs. Blocking until an
event is set is far preferable.
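For a socket, the shape I have in mind is roughly this (a sketch using the
Winsock 2 event functions; s is your socket):

WSAEVENT ev = WSACreateEvent();

WSAEventSelect(s, ev, FD_READ | FD_CLOSE);   /* signal ev on data or close */

/* Sleep until there is actually something to do. */
if (WSAWaitForMultipleEvents(1, &ev, FALSE, WSA_INFINITE, FALSE)
        == WSA_WAIT_EVENT_0)
{
    WSANETWORKEVENTS ne;

    WSAEnumNetworkEvents(s, ev, &ne);        /* also resets the event */
    if (ne.lNetworkEvents & FD_READ)
    {
        /* recv() will not block now */
    }
}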

Cheers,
Felix.

--
If you post a reply, kindly refrain from emailing it, too.

socket send() does not actually send...

Post by Kiran Prabhakar » Fri, 27 Jun 1997 04:00:00



> I have written a client-server application using Windows sockets
> (SOCK_STREAM, TCP), on Windows NT.
<snip>
> Since send() always returns success, I need some way to check whether the
> earlier send() actually completed, or whether TCP/IP is ready to accept
> the next send(). At the client side I do not know whether the server is
> receiving the data or not!

Hi Manoj,

        This might sound familiar to you, but just in case...

	As you have pointed out, it appears that the server is too busy,
probably processing the incoming data, and it is losing data. The
Winsock service provider is probably receiving the packets at a lower
level and discarding them because its buffers are full.

	To confirm that this is the case, I suggest you stop invoking
the code which operates on the data, but retain the segment of code
which checks whether data is lost. If you observe that no data is being
lost now, then move all the code which processes the data into its own
thread, and have a queue between your receive thread and the data
processing thread. This might ease things a bit.
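	A rough sketch of the split I mean is below. Everything here is
invented for illustration: a fixed-size ring buffer, one receive thread,
one worker thread, and no overflow handling. Link with wsock32.lib.

#include <windows.h>      /* also pulls in winsock.h on NT */
#include <stdlib.h>
#include <string.h>

#define QSIZE 64

static CRITICAL_SECTION qLock;      /* guards the ring buffer */
static HANDLE qItems;               /* semaphore: counts queued buffers */
static char  *qBuf[QSIZE];
static int    qLen[QSIZE];
static int    qHead, qTail;

static void QueueInit(void)
{
    InitializeCriticalSection(&qLock);
    qItems = CreateSemaphore(NULL, 0, QSIZE, NULL);
}

/* Receive thread: drain the socket as fast as data arrives. */
static DWORD WINAPI RecvThread(LPVOID arg)
{
    SOCKET s = (SOCKET)arg;
    char buf[4096];
    int n;

    while ((n = recv(s, buf, sizeof buf, 0)) > 0)
    {
        char *copy = malloc(n);
        memcpy(copy, buf, n);

        EnterCriticalSection(&qLock);
        qBuf[qTail] = copy;
        qLen[qTail] = n;
        qTail = (qTail + 1) % QSIZE;    /* sketch: no overflow check */
        LeaveCriticalSection(&qLock);

        ReleaseSemaphore(qItems, 1, NULL);   /* wake the worker */
    }
    return 0;
}

/* Worker thread: do the slow processing away from the receive path. */
static DWORD WINAPI WorkerThread(LPVOID arg)
{
    for (;;)
    {
        char *data;
        int n;

        WaitForSingleObject(qItems, INFINITE);  /* wait for a buffer */

        EnterCriticalSection(&qLock);
        data = qBuf[qHead];
        n    = qLen[qHead];
        qHead = (qHead + 1) % QSIZE;
        LeaveCriticalSection(&qLock);

        /* ProcessData(data, n);  -- your existing processing code */
        free(data);
    }
    return 0;
}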

        Hope this helps. Do let me know how it goes.

                        Good Luck!!
Kiran Prabhakar

socket send() does not actually send...

Post by Allen Brunso » Sat, 28 Jun 1997 04:00:00



> If you loop, you ought to look into other designs. Blocking
> until an event is set is far preferable.

Would I have to block on all possible events?  My programs have to do I/O
via TCP/IP with each other and with Telnet clients (you helped me get that
working, as you might recall), across serial ports (a lot of programmable
large-scale telephone switches are accessed this way), and with SQL Server.
I also have to process keyboard input events and program-close events (when
somebody hits the X button on my console window). And I want to display my
on-screen clock once a second, so the paranoids who maintain my programs
know that they're still working, and to use those time updates to trigger
the generation of telephony traffic reports every now and then.

You'll no doubt be able to tell me which APIs to look up to investigate this,
right?
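From a first skim of the docs, I'm guessing the shape would be something
like this; every handle name below is invented:

HANDLE waits[3];

waits[0] = hSocketEvent;                     /* e.g. from WSAEventSelect() */
waits[1] = hSerialEvent;                     /* overlapped serial-port event */
waits[2] = GetStdHandle(STD_INPUT_HANDLE);   /* console keyboard */

for (;;)
{
    /* Wake for any event source, or once a second to repaint the clock. */
    DWORD rc = WaitForMultipleObjects(3, waits, FALSE, 1000);

    if      (rc == WAIT_OBJECT_0)     { /* socket activity  */ }
    else if (rc == WAIT_OBJECT_0 + 1) { /* serial activity  */ }
    else if (rc == WAIT_OBJECT_0 + 2) { /* keyboard input   */ }
    else if (rc == WAIT_TIMEOUT)      { /* redraw the clock */ }
}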

Thanks very much for taking the time to comment.  I like learning how to do
things the right way.

Allen

1. socket send() does not return error....

Do you use a TCP or UDP socket?

On a TCP socket, if the socket is in blocking mode, send() should block
until the server receives the data, or at least until the TCP protocol
buffers it. In non-blocking mode, send() may fail with the error
EWOULDBLOCK.

Also, the client should close the socket properly (linger option) so that
the data in the TCP buffers is received by the server.

On a UDP (datagram) socket, it is up to the application to do flow control
and error checking.
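On the linger point, a common way to make sure queued data reaches the peer
is a shutdown-then-drain close, sketched here (SD_SEND is the Winsock 2
name for the value 1):

/* Stop sending, let the peer read whatever is still queued, then close. */
shutdown(s, 1 /* SD_SEND */);

for (;;)
{
    char tmp[256];
    int n = recv(s, tmp, sizeof tmp, 0);

    if (n <= 0)    /* 0: peer closed its side; SOCKET_ERROR: give up */
        break;
}
closesocket(s);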


