setsockopt( fh, IPPROTO_TCP, TCP_NODELAY, &op, sizeof( op ) )

Post by Markus Zing » Wed, 03 Sep 1997 04:00:00



Hi all

I am trying to disable the Nagle algorithm with the following call:

setsockopt( fh, IPPROTO_TCP, TCP_NODELAY, &op, sizeof( op ) )

It does not seem to change anything. Is there perhaps something wrong
with the parameters?
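
For reference, here is a minimal sketch of the usual form of the call. The
helper name is just illustrative; the option argument must be a non-zero
int, and the return value should be checked:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>

    /* Disable the Nagle algorithm on an already connected TCP socket. */
    static int disable_nagle(int fh)
    {
        int op = 1;    /* any non-zero value turns TCP_NODELAY on */

        if (setsockopt(fh, IPPROTO_TCP, TCP_NODELAY,
                       (char *)&op, sizeof(op)) < 0) {
            perror("setsockopt(TCP_NODELAY)");
            return -1;
        }
        return 0;
    }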

Any help is greatly appreciated.

TIA

Markus


setsockopt( fh, IPPROTO_TCP, TCP_NODELAY, &op, sizeof( op ) )

Post by Markus Zing » Thu, 04 Sep 1997 04:00:00



> Hi all

> I am trying to disable the Nagle algorithm with the following call:

> setsockopt( fh, IPPROTO_TCP, TCP_NODELAY, &op, sizeof( op ) )

> It does not seem to change anything. Is there perhaps something wrong
> with the parameters?

> Any help is greatly appreciated.

In the meantime I have verified that the call works. My problem is that my
client-server software runs quite well between Windows machines of any
kind. As soon as one of the UNIX boxes available to me (HP-UX and Sun
Solaris) acts as the server, I see strange behavior:

Calls that take longer for the server to complete are answered with a
much higher delay than the time they actually consume on the server.
Calls that complete quickly, on the other hand, are answered promptly.
At least this is the impression I get while experimenting. TCP_NODELAY
does not change anything, since my client-server app is designed so that
the client waits for the answer to each request before sending the next
one.

The Sun server is an UltraSPARC-based machine, and the only thing it is
running is my server. A single 133 MHz Pentium-based NT server is about
15 times faster than the UNIX one. The operations the server performs, on
the other hand, complete a lot faster on the Sun. The big delay seems to
be related to the socket code. Do I have to rewrite my code using UDP?
Any other ideas?

TIA

Markus


setsockopt( fh, IPPROTO_TCP, TCP_NODELAY, &op, sizeof( op ) )

Post by Andrew Giert » Thu, 04 Sep 1997 04:00:00


 Markus> The Sun server is an UltraSPARC-based machine, and the only
 Markus> thing it is running is my server. A single 133 MHz Pentium-based
 Markus> NT server is about 15 times faster than the UNIX one. The
 Markus> operations the server performs, on the other hand, complete a
 Markus> lot faster on the Sun. The big delay seems to be related to the
 Markus> socket code. Do I have to rewrite my code using UDP? Any other
 Markus> ideas?

I'd start by looking at the code in the server that sends responses back
to the client. This code must execute as few write() calls as possible
(ideally, one only). Similar considerations apply to the code in the
client that sends requests to the server.
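
One way to get the header and the body of a reply out in a single call,
without first copying them into one buffer, is writev(). A minimal
sketch, assuming a 4-byte length header; the function and variable names
are illustrative, not taken from the original code:

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <arpa/inet.h>

    /* Send a 4-byte length header plus the payload with one writev() call,
       so the kernel sees a single write instead of two small ones. */
    static int send_framed(int sock, const void *payload, uint32_t len)
    {
        uint32_t netlen = htonl(len);          /* header in network byte order */
        struct iovec iov[2];
        ssize_t want = (ssize_t)(sizeof netlen + len);

        iov[0].iov_base = &netlen;
        iov[0].iov_len  = sizeof netlen;
        iov[1].iov_base = (void *)payload;     /* cast away const for the iovec */
        iov[1].iov_len  = len;

        /* A production version would loop here to handle short writes. */
        return writev(sock, iov, 2) == want ? 0 : -1;
    }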

--
Andrew.

comp.unix.programmer FAQ: see <URL: http://www.erlenstar.demon.co.uk/unix/>
                           or <URL: http://www.whitefang.com/unix/>


setsockopt( fh, IPPROTO_TCP, TCP_NODELAY, &op, sizeof( op ) )

Post by Kni.. » Fri, 05 Sep 1997 04:00:00





>> I'd start by looking at the code in the server that sends responses
>> back to the client. This code must execute as few write() calls as
>> possible (ideally, one only). Similar considerations apply to the
>> code in the client that sends requests to the server.

> That's exactly what the code does. Each request consists of one
> single send(). The same is true for the answer. Each request has as
> its first argument the length of the request, so the server can call
> recv() until it has the whole request. (I ensure that additional data
> that might be there is buffered, so the next read operation first
> examines the buffer.) The problem seems to be related to the time the
> server needs to fulfill the request itself. If this time is longer
> than a specific amount of time, then the answer is additionally delayed
> by the socket code. What I'm interested to know is: is this possible, or
> am I totally wrong? Is there something in the socket library that
> decides that a job is somehow of lower priority because the time
> between calls to the library is above a certain limit? And if so, is
> there a way around this?

I tried to E-mail you but could not resolve your address.

The idea of placing a message length count at the head of a network
message is a common and perfectly good technique. Furthermore, if you
are constructing messages and responses in this way and
transmitting/receiving using a single send/recv call, then disabling
Nagle will give optimum network performance.

So, your structure seems OK. I have just one thought, which may seem
obvious but having not seen your code, I'll ask anyway... "Is the header
(message length) being sent in network byte order?" If it isn't, you may
be trying to read too many characters during recv() and perhaps
(inadvertently) relying on a timeout which would cause a measurable
delay.
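
For what it's worth, the receiving side would normally read the 4-byte
header, convert it with ntohl(), and then loop until exactly that many
payload bytes have arrived. A rough sketch; the names and the fixed
4-byte header are assumptions, not taken from the original code:

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    /* Read exactly n bytes, looping over partial recv() results. */
    static int recv_all(int sock, void *buf, size_t n)
    {
        char *p = buf;

        while (n > 0) {
            ssize_t r = recv(sock, p, n, 0);
            if (r <= 0)
                return -1;              /* error or peer closed the connection */
            p += r;
            n -= (size_t)r;
        }
        return 0;
    }

    /* Read one length-prefixed message; returns the payload length or -1. */
    static long recv_framed(int sock, void *buf, size_t bufsize)
    {
        uint32_t netlen, len;

        if (recv_all(sock, &netlen, sizeof netlen) < 0)
            return -1;
        len = ntohl(netlen);            /* header travels in network byte order */
        if (len > bufsize)
            return -1;                  /* message larger than the caller's buffer */
        if (recv_all(sock, buf, len) < 0)
            return -1;
        return (long)len;
    }

If the header were sent in host byte order from a little-endian machine,
ntohl() on the receiver would produce a huge length and a loop like this
would sit waiting for data that never arrives, which is exactly the kind
of delay described above.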

Otherwise, we may need to see some code.

--


setsockopt( fh, IPPROTO_TCP, TCP_NODELAY, &op, sizeof( op ) )

Post by Markus Zing » Sat, 06 Sep 1997 04:00:00



> I tried to E-mail you but could not resolve your address.

I'm sorry, I forgot to note that the '.jblock' has to be removed from my
address (it is a junk blocker).

> The idea of placing a message length count at the head of a network
> message is a common and perfectly good technique. Furthermore, if you
> are constructing messages and responses in this way and
> transmitting/receiving using a single send/recv call, then disabling
> Nagle will give optimum network performance.

I've been playing around a bit in the meantime. Turning off Nagle really
does have some effect on the UNIX boxes. On the other hand, it does not
change much between the Windows machines.

> So, your structure seems OK. I have just one thought, which may seem
> obvious but having not seen your code, I'll ask anyway... "Is the header
> (message length) being sent in network byte order?" If it isn't, you may
> be trying to read too many characters during recv() and perhaps
> (inadvertently) relying on a timeout which would cause a measurable
> delay.

The byte order is correct. I did, however, find a case where an answer
was built with more than one send(); I changed it, and it really sped
things up. There is still one fact I can hardly understand:

Windows PC client (200 MHz W95) to NT server (133 MHz NT 4.0) -> best
performance result.
Windows PC client (200 MHz W95) to Sun UltraSPARC server -> about 25%
slower.

Exactly the same operations executed on the servers themselves: the
Windows NT server (133 MHz NT 4.0) is now about 50% slower than the Sun
server.

Can it really be that a simple (133 MHz NT 4.0) server outperforms a
Sun regarding socket I/O, or do you think there is still something in my
code that I should investigate further?

Thanks very much for your previous answers.

Markus