tcp sends bigger data faster than small data

tcp sends bigger data faster than small data

Post by Stephan Absmeier » Wed, 01 Oct 2003 04:56:33



Hi,

I did:

typedef long long s64;
inline s64 getRealTime() {
   s64 result;
   __asm__ __volatile__ ("rdtsc" : "=A" (result));
   return result;

}

s64 start64, end64;
tcpSender testSender2=tcpSender(argv[1],line,length);
testSender2.init(option);    //SO_LINGER,bind,connect
start64=getRealTime();
testSender2.work();
closeid=testSender2.end();   //close(socket)
end64 = getRealTime();

I measured the time; I did 10 rounds to get the average value. I use Red
Hat 9.1, g++ 2.96, the data is randomly generated, 100 Mbit Ethernet,
switched. I used SO_LINGER to wait until the queue had been sent completely.

I sent 8000, 10240 and 80000 bytes over the network.
I did this several times because I couldn't believe it, but it was the
same every time:
it sent 8000 bytes in 32 ms, 10240 bytes in 40.8 ms and
80000 bytes in 10.5 ms.

Why is it faster to send 80000 bytes than fewer bytes? Can anyone explain?
Any idea is welcome.

Thanks
Stephan

 
 
 

tcp sends bigger data faster than small data

Post by Gisle Vanem » Wed, 01 Oct 2003 05:44:32



> I send 8000, 10240 and 80000 bytes over the network.
> I did this several times cause I couldn't believe it, but it was all
> time the same
> it send 8000 bytes in 32 ms, 10240 bytes in 40.8 ms and
> 80000 bytes in 10.5 ms.

> why is it faster to send 80000 bytes then less bytes? who can explain? I
> can use any idea.

I think your timing method is flawed. Are you calculating and
printing the 64-bit TSC diff correctly, or maybe rdtsc wraps?
Some libs use %lld and some use %Ld to print a signed 64-bit
value. Why not use clock() instead? Plenty of precision for what you
are measuring.
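
If the rdtsc route is kept, a minimal stand-alone sketch of taking and
printing the difference (your getRealTime() is copied as-is, the rest is
only illustrative). Note that the TSC counts CPU cycles, not seconds, so
the difference still has to be divided by the clock frequency:

#include <cstdio>

typedef long long s64;
inline s64 getRealTime() {
   s64 result;
   __asm__ __volatile__ ("rdtsc" : "=A" (result));  // 32-bit x86, gcc
   return result;
}

int main() {
   s64 start64 = getRealTime();
   /* ... the work being timed goes here ... */
   s64 end64 = getRealTime();
   s64 diff = end64 - start64;        // elapsed CPU cycles, not seconds
   printf("cycles: %lld\n", diff);    // %lld prints a signed 64-bit value with glibc
   return 0;
}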

--gv

 
 
 

tcp sends bigger data faster than small data

Post by Stephan Absmeier » Wed, 01 Oct 2003 06:03:38


Gisle Vanem schrieb:


>>I send 8000, 10240 and 80000 bytes over the network.
>>I did this several times cause I couldn't believe it, but it was all
>>time the same
>>it send 8000 bytes in 32 ms, 10240 bytes in 40.8 ms and
>>80000 bytes in 10.5 ms.

>>why is it faster to send 80000 bytes then less bytes? who can explain? I
>>can use any idea.

> I think your timing method is flawed. Are you calculating and
> printing the 64bit TSC-diff correctly, or maybe rdtsc wraps?
> Some libs uses %lld and some uses %Ld to print a signed 64-bit
> value. Why not use clock() instead? Plenty precision for what's you
> are measuring.

> --gv

I changed s64 to clock_t and getRealTime() to clock(), and it always
returned 0, and one time 1000.
Did I do something wrong?
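
For what it's worth, clock() measures CPU time used by the process, not
wall-clock time, and the raw value has to be divided by CLOCKS_PER_SEC; a
process that spends most of its time blocked inside send()/close()
accumulates almost no ticks, which would explain the zeros. A minimal
stand-alone sketch (not from the test program):

#include <ctime>
#include <cstdio>

int main() {
   clock_t before = clock();
   /* ... the work being timed goes here ... */
   clock_t after = clock();
   /* CPU seconds, not elapsed seconds; time spent blocked in I/O barely counts */
   printf("cpu time: %f s\n", (double)(after - before) / CLOCKS_PER_SEC);
   return 0;
}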
 
 
 

tcp sends bigger data faster than small data

Post by Stephan Absmeier » Wed, 01 Oct 2003 15:35:29


Gisle Vanem schrieb:


>>I send 8000, 10240 and 80000 bytes over the network.
>>I did this several times cause I couldn't believe it, but it was all
>>time the same
>>it send 8000 bytes in 32 ms, 10240 bytes in 40.8 ms and
>>80000 bytes in 10.5 ms.

>>why is it faster to send 80000 bytes then less bytes? who can explain? I
>>can use any idea.

> I think your timing method is flawed. Are you calculating and
> printing the 64bit TSC-diff correctly, or maybe rdtsc wraps?
> Some libs uses %lld and some uses %Ld to print a signed 64-bit
> value. Why not use clock() instead? Plenty precision for what's you
> are measuring.

> --gv

I tried gettimeofday and again: sending 8000 bytes needs 36 to 38
ms, 10240 bytes needs 37 to 38 ms and 80000 bytes needs 8 to 47 ms, 14.9
ms on average. But 800000 bytes needs 68.6 to 85 ms.

double after;
timeval end;
gettimeofday(&end,NULL);
after=end.tv_sec+1.e-6*end.tv_usec;

The result is like the result of getRealTime:
8 out of 10 runs transferring 80000 bytes are faster than 10 ms and
therefore faster than all the runs sending 8000 or 10240 bytes.

Does anybody have an idea?
Stephan

 
 
 

tcp sends bigger data faster than small data

Post by Stephan Absmeier » Wed, 01 Oct 2003 16:31:29


Stephan Absmeier schrieb:

> Gisle Vanem schrieb:


>>> I send 8000, 10240 and 80000 bytes over the network.
>>> I did this several times cause I couldn't believe it, but it was all
>>> time the same
>>> it send 8000 bytes in 32 ms, 10240 bytes in 40.8 ms and
>>> 80000 bytes in 10.5 ms.

>>> why is it faster to send 80000 bytes then less bytes? who can explain? I
>>> can use any idea.

>> I think your timing method is flawed. Are you calculating and
>> printing the 64bit TSC-diff correctly, or maybe rdtsc wraps?
>> Some libs uses %lld and some uses %Ld to print a signed 64-bit
>> value. Why not use clock() instead? Plenty precision for what's you
>> are measuring.

>> --gv

> I tried gettimeofday and again: sending 8000 bytes needs 36 to 38
> ms,10240 bytes needs 37 to 38 ms and 80000 bytes needs 8 to 47 ms, 14.9
> ms in the average. But 800000 bytes need 68.6 to 85 ms.

> double after;
> timeval end;
> gettimeofday(&end,NULL);
> after=end.tv_sec+1.e-6*end.tv_usec;

>  the result is like the result of getRealTime
> 8 out of 10 runs transfering 80000 bytes are faster then 10 ms and for
> that faster then all the runs sending 8000 or 10240 bytes.

> have anybody an idea?
> Stephan

I did some more tests; the runs are in seconds:

Bytes   fastest run     average run     slowest run
  8000  0.031323        0.0315113       0.03211
 10240  0.040979        0.0410652       0.041146
 20000  0.040621        0.0407288       0.040818
 30000  0.040002        0.0403257       0.040455
 35000  0.003506        0.0035568       0.003608
 40000  0.003849        0.0289824       0.039732
 50000  0.004638        0.0046604       0.004724
 80000  0.007648        0.0077132       0.007806
100000  0.008806        0.0097092       0.016487
200000  0.01764         0.0208155       0.048285
300000  0.02585         0.029469        0.058012
350000  0.030264        0.0304887       0.030698
400000  0.034348        0.041239        0.067274
500000  0.042902        0.0431682       0.043544
800000  0.068526        0.0735361       0.106607

It starts being faster than smaller sizes somewhere between 30000 and 35000
bytes, and the effect ends at 350000 bytes. 100 Mbit/s means 35000 bytes
need a minimum time of 0.0028 seconds (without overhead).
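
As a quick cross-check of that arithmetic, a small stand-alone program (not
part of the test code) that prints the raw 100 Mbit/s lower bound for each
tested size, ignoring Ethernet/IP/TCP overhead:

#include <iostream>
using namespace std;

int main() {
   const double link = 100e6;   // 100 Mbit/s
   const int sizes[15] = {8000, 10240, 20000, 30000, 35000, 40000, 50000,
                          80000, 100000, 200000, 300000, 350000, 400000,
                          500000, 800000};
   for (int i = 0; i < 15; i++) {
      // minimum time = payload bits / link rate, e.g. 35000*8/100e6 = 0.0028 s
      cout << sizes[i] << " bytes: >= " << (sizes[i] * 8.0) / link << " s" << endl;
   }
   return 0;
}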

 
 
 

tcp sends bigger data faster than small data

Post by Peter T. Breuer » Wed, 01 Oct 2003 17:22:35



> I did some more tests; the runs are in seconds:

That's good. Can you do some more graphing, please (and no tabs!!!
Fixing).

> Bytes   fastest run     average run     slowest run
>    8000 0.031323        0.0315113       0.03211
>   10240 0.040979        0.0410652       0.041146
>   20000 0.040621        0.0407288       0.040818
>   30000 0.040002        0.0403257       0.040455
>   35000 0.003506        0.0035568       0.003608
>   40000 0.003849        0.0289824       0.039732
>   50000 0.004638        0.0046604       0.004724
>   80000 0.007648        0.0077132       0.007806
> 100000  0.008806        0.0097092       0.016487
> 200000  0.01764         0.0208155       0.048285
> 300000  0.02585         0.029469        0.058012
> 350000  0.030264        0.0304887       0.030698
> 400000  0.034348        0.041239        0.067274
> 500000  0.042902        0.0431682       0.043544
> 800000  0.068526        0.0735361       0.106607

Yes - this sort of accords with a vague idea I have that over a small
number of fragments the NIC will wait to see if there are any more
incoming before telling the system it has something.

> it starts between 30000 and 35000 bytes to be faster than fewer bytes and
> ends at 350000 bytes. 100 MBit means 35000 bytes need a minimum time of
> 0.0028 seconds (without overhead).

What I don't get is how you can send 800KB in 0.07 seconds. That's
11.2MB/s. Oh well, 100BT. Yes, some NICs probably do the wait-and-see
trick.

Peter

 
 
 

tcp sends bigger data faster than small data

Post by Stephan Absmeier » Wed, 01 Oct 2003 17:49:13


Peter T. Breuer schrieb:


>>I did some more tests, the runs are seconds:

> That's good. Can you do some more graphing, please (and no tabs!!!
> Fixing).

>>Bytes   fastest run     average run     slowest run
>>   8000 0.031323        0.0315113       0.03211
>>  10240 0.040979        0.0410652       0.041146
>>  20000 0.040621        0.0407288       0.040818
>>  30000 0.040002        0.0403257       0.040455
>>  35000 0.003506        0.0035568       0.003608
>>  40000 0.003849        0.0289824       0.039732
>>  50000 0.004638        0.0046604       0.004724
>>  80000 0.007648        0.0077132       0.007806
>>100000  0.008806        0.0097092       0.016487
>>200000  0.01764         0.0208155       0.048285
>>300000  0.02585         0.029469        0.058012
>>350000  0.030264        0.0304887       0.030698
>>400000  0.034348        0.041239        0.067274
>>500000  0.042902        0.0431682       0.043544
>>800000  0.068526        0.0735361       0.106607

> Yes - this sort of accords with a vague idea I have that over a small
> number of fragments the NIC will wait to see if there are any more
> incoming before telling the system it has something.

>>it starts between 30000 and 35000 bytes to be faster then less bytes and
>>ends at 350000 bytes. 100 MBit means 35000 bytes need a minimum time of
>>0.0028 seconds (without overhead).

> What I don't get is how you can send 800KB in 0.07 seconds. That's
> 11.2MB/s. Oh well, 100BT. Yes, some NICs probably do the wait-and-see
> trick.

That's one of my problems. That's not how TCP should behave, because of slow
start.
This was all done with the default settings after installation. Turning off
the Nagle algorithm (TCP_NODELAY) speeds it up a little more. The fastest
was 0.06831 seconds for 800000 bytes. That means 93.6 Mbit/s!! If I ping
the other host, it needs 0.00014 to 0.00015 seconds.

I'm going nuts over this.
Stephan

 
 
 

tcp sends bigger data faster than small data

Post by Peter T. Breuer » Wed, 01 Oct 2003 19:20:13



> Peter T. Breuer schrieb:

> >>I did some more tests, the runs are seconds:

> > That's good. Can you do some more graphing, please (and no tabs!!!
> > Fixing).

Can you please do that? The graphing, I mean. We need more data points.
And indicate how you are doing these tests, so that the results become
reproducible (and meaningful!).

> >>Bytes   fastest run     average run     slowest run
> >>   8000 0.031323        0.0315113       0.03211
> >>  10240 0.040979        0.0410652       0.041146
> >>  20000 0.040621        0.0407288       0.040818
> >>  30000 0.040002        0.0403257       0.040455
> >>  35000 0.003506        0.0035568       0.003608
> >>  40000 0.003849        0.0289824       0.039732
> >>  50000 0.004638        0.0046604       0.004724
> >>  80000 0.007648        0.0077132       0.007806
> >>100000  0.008806        0.0097092       0.016487
> >>200000  0.01764         0.0208155       0.048285
> >>300000  0.02585         0.029469        0.058012
> >>350000  0.030264        0.0304887       0.030698
> >>400000  0.034348        0.041239        0.067274
> >>500000  0.042902        0.0431682       0.043544
> >>800000  0.068526        0.0735361       0.106607

> > Yes - this sort of accords with a vague idea I have that over a small
> > number of fragments the NIC will wait to see if there are any more
> > incoming before telling the system it has something.
> That's one of my problems. That's not like tcp should act cause of slow
> start.

Eh? What I suggest is just due to the way some 100BT NICs behave.

> This all was done with the default settings after installation. Turning off
> the Nagle-Algorithm (TCP_NODELAY) speeds it up a little more. The fastest

What I suggest is nothing to do with the o/s.

Please perform the further experiments to confirm or deny. Detail your
measurement mode. To make valid measurements you will need to let the
NIC quiesce after each "packet". At least 1s delay.

You should then remeasure on a continuous stream, taking the average
speed on the stream. If I am right, all should converge there.
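
A minimal sketch of the kind of continuous-stream measurement meant here
(stand-alone, not your program; the host name "blowfish" and port 5001 are
just placeholders, and a receiver that reads and discards the data is
assumed on the other end):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <sys/time.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>
#include <cstdlib>

int main() {
   const int chunk = 8000;                 // size of one send()
   const int repeats = 1000;               // ~8 MB streamed in total
   char buf[chunk];
   memset(buf, 'x', sizeof(buf));

   struct hostent *host = gethostbyname("blowfish");   // placeholder host
   if (host == NULL) { fprintf(stderr, "unknown host\n"); exit(1); }
   struct sockaddr_in addr;
   memset(&addr, 0, sizeof(addr));
   addr.sin_family = AF_INET;
   memcpy(&addr.sin_addr, host->h_addr_list[0], host->h_length);
   addr.sin_port = htons(5001);                         // placeholder port
   int s = socket(AF_INET, SOCK_STREAM, 0);
   if (s < 0 || connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
      perror("connect"); exit(1);
   }

   struct timeval t0, t1;
   gettimeofday(&t0, NULL);
   for (int i = 0; i < repeats; i++)
      if (send(s, buf, chunk, 0) < 0) { perror("send"); exit(1); }
   close(s);   // no SO_LINGER here; over ~8 MB the unsent tail is a small error
   gettimeofday(&t1, NULL);

   double secs = (t1.tv_sec - t0.tv_sec) + 1e-6 * (t1.tv_usec - t0.tv_usec);
   printf("%d bytes in %f s = %f Mbit/s\n", chunk * repeats, secs,
          chunk * repeats * 8.0 / secs / 1e6);
   return 0;
}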

Peter

 
 
 

tcp sends bigger data faster than small data

Post by Keith Wansbrough » Wed, 01 Oct 2003 20:49:02



> Please perform the further experiments to confirm or deny. Detail your
> measurement mode. To make valid measurements you will need to let the
> NIC quiesce after each "packet". At least 1s delay.

Also, don't always start with the smaller files and work your way up
to bigger ones - there are probably some startup costs associated with
the first transfer.  ARP comes to mind - are you sure that both hosts
have the other's MAC addrs in their ARP caches before you start
testing?
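
One simple way to take such first-transfer costs out of the numbers is an
untimed warm-up pass before the timed runs; a sketch in terms of the
tcpSender from your first post (its constructor and init/work/end calls are
yours, the details here are only illustrative):

   // untimed warm-up: forces ARP resolution and other one-off setup costs
   tcpSender warmup = tcpSender(argv[1], line, 1024);
   warmup.init(0);      // default options
   warmup.work();       // send and throw away a small amount of data
   warmup.end();
   sleep(1);            // settle before the timed runs start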

--KW 8-)
--

http://www.cl.cam.ac.uk/users/kw217/
University of Cambridge Computer Laboratory.

 
 
 

tcp sends bigger data faster than small data

Post by Stephan Absmeier » Wed, 01 Oct 2003 22:17:31


Keith Wansbrough schrieb:


>>Please perform the further experiments to confirm or deny. Detail your
>>measurement mode. To make valid measurements you will need to let the
>>NIC quiesce after each "packet". At least 1s delay.

I don't have root rights. How should I quiesce the card?
There is no packet delay module installed.


> Also, don't always start with the smaller files and work your way up
> to bigger ones - there are probably some startup costs associated with
> the first transfer.  ARP comes to mind - are you sure that both hosts
> have the other's MAC addrs in their ARP caches before you start
> testing?

> --KW 8-)

Hi,

this time upside down. I started with 800000 Bytes going to
8000 Bytes:

Bytes    fastest run   average run   slowest run
800000   0.068346      0.0688138     0.069734
500000   0.042749      0.046447      0.073162
400000   0.034251      0.0346192     0.035127
350000   0.030081      0.0334658     0.062391
300000   0.026015      0.0261624     0.026631
200000   0.017581      0.0177638     0.017923
100000   0.009186      0.0092719     0.009383
 80000   0.007103      0.0109007     0.0411
 50000   0.0048        0.0048566     0.005025
 40000   0.003765      0.0258131     0.040525
 35000   0.003641      0.0036926     0.003859
 30000   0.039624      0.039726      0.039826
 20000   0.038596      0.0392895     0.039463
 10240   0.037367      0.0401245     0.049999
  8000   0.03831       0.0386388     0.039061

nearly the same values as before but exactly the same picture.

It is done in a loop. There is a 1 second break between the
runs and a 5 second break between the different packet sizes. The
socket has been closed after each run because of the time measurement
method.

ARP can be a factor for the first run, but not later: I had a
look at /proc/net/arp, it wasn't there, I pinged the other
host, it was there, and three minutes later the MAC address was
still there.

Stephan

 
 
 

tcp sends bigger data faster than small data

Post by Peter T. Breuer » Wed, 01 Oct 2003 22:45:47



> Keith Wansbrough schrieb:

> >>Please perform the further experiments to confirm or deny. Detail your
> >>measurement mode. To make valid measurements you will need to let the
> >>NIC quiesce after each "packet". At least 1s delay.
> I don't have root rights. How should I quiesce the card?

Eh? Just sleep for a while. 1s between sends.

> There is no packet delay module installed.

Eh? How are you performing your measurements if you cannot control
when your packet is sent? I presumed you were just doing a write to a
socket, and timing how long it takes!

> > Also, don't always start with the smaller files and work your way up
> > to bigger ones - there are probably some startup costs associated with
> > the first transfer.  ARP comes to mind - are you sure that both hosts
> > have the other's MAC addrs in their ARP caches before you start
> > testing?
> this time upside down. I started with 800000 Bytes going to
> 8000 Bytes:

Useless, unless you tell us what your experimental procedure is!

> Bytes     fastest run   average run   slowest run
> 800000     0.068346     0.0688138     0.069734
> 500000     0.042749     0.046447      0.073162
> 400000     0.034251     0.0346192     0.035127
> 350000     0.030081     0.0334658     0.062391
> 300000     0.026015     0.0261624     0.026631
> 200000     0.017581     0.0177638     0.017923
> 100000     0.009186     0.0092719     0.009383
>   80000     0.007103     0.0109007     0.0411
>   50000     0.0048       0.0048566     0.005025
>   40000     0.003765     0.0258131     0.040525
>   35000     0.003641     0.0036926     0.003859
>   30000     0.039624     0.039726      0.039826
>   20000     0.038596     0.0392895     0.039463
>   10240     0.037367     0.0401245     0.049999
>    8000     0.03831      0.0386388     0.039061
> nearly the same values as before but exactly the same picture.

But we (I) asked for more details. What you show is not fine grained
enough to give us an idea of what could be happening.

> It is done in a loop. There is a 1 second break between the
> runs and a 5 second break between the different packets. The

Runs? What is a "run"?

> socket has been closed after each run, because of the time measure
> methods.

Well, I would be happier if you did not close it, but it makes no
difference at the packet level.

Now confirm that streaming continuously gives you the same speeds for
all, and you have the answer. The NIC is waiting for more on
the small stuff.

Peter

 
 
 

tcp sends bigger data faster than small data

Post by Stephan Absmeier » Wed, 01 Oct 2003 23:34:44


Peter T. Breuer schrieb:

> In comp.os.linux.development.system Stephan Absmeier <m...@privacy.net> wrote:

>>Keith Wansbrough schrieb:

>>>"Peter T. Breuer" <p...@oboe.it.uc3m.es> writes:

>>>>Please perform the further experiments to confirm or deny. Detail your
>>>>measurement mode. To make valid measurements you will need to let the
>>>>NIC quiesce after each "packet". At least 1s delay.

>>I don't have root rights. How should I quiesce the card?

> Eh? Just sleep for a while. 1s between sends.

>>There is no packet delay module installed.

> Eh? How are you performing your measurements if you cannot control
> when your packet is sent? I presumed you were just doing a write to a
> socket, and timing how long it takes!

>>>Also, don't always start with the smaller files and work your way up
>>>to bigger ones - there are probably some startup costs associated with
>>>the first transfer.  ARP comes to mind - are you sure that both hosts
>>>have the other's MAC addrs in their ARP caches before you start
>>>testing?

>>this time upside down. I started with 800000 Bytes going to
>>8000 Bytes:

> Useless, unless you tell us what your experimental procedure is!

>>Bytes     fastest run   average run   slowest run
>>800000     0.068346     0.0688138     0.069734
>>500000     0.042749     0.046447      0.073162
>>400000     0.034251     0.0346192     0.035127
>>350000     0.030081     0.0334658     0.062391
>>300000     0.026015     0.0261624     0.026631
>>200000     0.017581     0.0177638     0.017923
>>100000     0.009186     0.0092719     0.009383
>>  80000     0.007103     0.0109007     0.0411
>>  50000     0.0048       0.0048566     0.005025
>>  40000     0.003765     0.0258131     0.040525
>>  35000     0.003641     0.0036926     0.003859
>>  30000     0.039624     0.039726      0.039826
>>  20000     0.038596     0.0392895     0.039463
>>  10240     0.037367     0.0401245     0.049999
>>   8000     0.03831      0.0386388     0.039061

>>nearly the same values as before but exactly the same picture.

> But we (I) asked for more details. What you show is not fine grained
> enough to give us an idea of what could be happening.

See the code at the end. Should I repost my first post, or
what else are you interested in?

>>It is done in a loop. There is a 1 second break between the
>>runs and a 5 second break between the different packets. The

> Runs? What is a "run"?

sending the data one time

>>socket has benn closed after each run, cause of time measure
>>methods.

> Well, I would be happier if you did not close it, but it makes no
> difference at the packet level.

it's also so that each run starts again with slow start

> Now confirm that streaming continuously gives you the same speeds for
> all, and you have the answer. The NIC is waiting for more on
> the small stuff.

How big should this file be? 1MB, 10MB or more?

> Peter

Here is the version using gettimeofday. I deleted all the
getRealTime stuff to reduce the length and tried to
translate it to English. Hopefully you can see what I did. All
the tests were made with option 0 so far. The receiver just puts
the data in a buffer and dumps it.

Hope it's not too long to post.

Thanks
Stephan

#include "abs_net.h" //constants like TCP_SERVER_PORT, KB...
#include <unistd.h>
#include <sys/time.h>
#include <sys/socket.h>   // socket(), setsockopt(), bind(), connect(), send()
#include <netinet/in.h>   // sockaddr_in, htons(), htonl()
#include <netinet/tcp.h>
#include <netdb.h>        // gethostbyname(), hostent
#include <cstring>        // memcpy()
#include <sstream>
#include <stdlib.h>
#include <iostream>
using namespace std;

class tcpSender
{
private:
   int length;
   char *line;
   int mySocket, protokollAddr;
   struct sockaddr_in serverAddr, clientAddr;
   struct hostent *host;
   int i,paketlength;
   int window_size, mss, priority;

public:
   tcpSender(const char *server,char *lineIn,int lengthIn)
   {
     length=lengthIn;
     line=lineIn;
     /* get server IP address (no check whether the input is an IP address or a DNS name) */
     host = gethostbyname(server);

     if(host==NULL)
       {
        cout<<"tcpSender: unknown host "<<server<<endl;
        exit(1);
       }

   }

   void init(int option)
   {
     serverAddr.sin_family = host->h_addrtype;
     memcpy((char *) &serverAddr.sin_addr.s_addr, host->h_addr_list[0], host->h_length);
     serverAddr.sin_port = htons(TCP_SERVER_PORT);

     /* socket creation */
     mySocket = socket(AF_INET,SOCK_STREAM,0);
     if(mySocket<0)
       {
        cout<<"tcpSender: cannot open socket"<<endl;
        exit(1);
       }

     /* wait until all data has been sent; needed for the time measurement */
     linger lingerwert;
     lingerwert.l_onoff=1;
     lingerwert.l_linger=32767;
     setsockopt(mySocket,SOL_SOCKET,SO_LINGER,(char*)&lingerwert,sizeof(lingerwert));

     /*
     TCP options (bit flags; add them to combine)
     8     buffer
     4     priority
     2     Nagle off
     1     MSS
     0     none
     15    all of them
     */
     if (option>=8)
       {
        /*  set SO_RCVBUF and SO_SNDBUF to 128*1024 Bytes */
        window_size=KB*1024;
        setsockopt(mySocket,SOL_SOCKET,SO_RCVBUF,(char*)&window_size,sizeof(window_size));
        setsockopt(mySocket,SOL_SOCKET,SO_SNDBUF,(char*)&window_size,sizeof(window_size));
        option -=8;
       }
     if (option >=4)
       {
        /*    Priority of the Queue           */
        priority=PRIORITAET;
        setsockopt(mySocket,SOL_SOCKET,SO_PRIORITY,(char*)&priority,sizeof(priority));
        option -= 4;
       }
     if (option >= 2)
       {
        /*    Nagle algorithm off          */
        int nodelay = 1;   /* was (char*)true, which passes an invalid pointer */
        setsockopt(mySocket, SOL_TCP, TCP_NODELAY, (char*)&nodelay, sizeof(nodelay));
        option -=2;
       }
     if (option >= 1)
       {
        /*    try another MSS  */
        mss=MSS;
        setsockopt(mySocket, SOL_TCP, TCP_MAXSEG, (char*)&mss, sizeof(mss));
        option -= 1;
       }

     /* bind any port */
     clientAddr.sin_family = AF_INET;
     clientAddr.sin_addr.s_addr = htonl(INADDR_ANY);
     clientAddr.sin_port = htons(0);

     protokollAddr = bind(mySocket, (struct sockaddr *) &clientAddr, sizeof(clientAddr));
     if(protokollAddr<0)
       {
        cout<<"tcpSender: cannot bind TCP port "<<clientAddr.sin_port<<endl;
        exit(1);
       }

     /* connect to server */
     protokollAddr = connect(mySocket, (struct sockaddr *) &serverAddr, sizeof(serverAddr));
     if(protokollAddr<0)
       {
        cout<<"tcpSender: cannot connect to Socket"<<endl;
        exit(1);
       }

   }

   void work()
   {
     protokollAddr = send(mySocket, line, length, 0);
     if(protokollAddr<0)
      {
        cout<<"tcpSender: cannot send data"<<endl;
        close(mySocket);
        exit(1);
      }

   }

   int end()
   {
     return close(mySocket);
   }

};

int main(int argc, char *argv[])
{
   int length, numberOfRuns,option, closeid;
   timeval start, end;
   /* check command line args */
   if(argc!=4)
     {
       cout<<"usage: tcpSender <server> number_of_runs option"<<endl;
       cout<<"example: tcpSender blowfish 10 15"<<endl;
       cout<<"options :"<<endl;
       cout<<"8     send buffer "<<KB<<"kb"<<endl;
       cout<<"4     priority "<<PRIORITAET<<endl;
       cout<<"2     Nagle algorithm off"<<endl;
       cout<<"1     mss "<<MSS<<endl;
       cout<<"0     default"<<endl;
       exit(1);
     }

   numberOfRuns=atoi(argv[2]);
   double before,after,duration[numberOfRuns],complete,min,max;
   option=atoi(argv[3]);
   char line[800000];
// fill the array with random chars ('a' .. 'p')
   for (int aa=0;aa<800000;aa++)
     {
       line[aa] = 'a' + (rand() % 16);
     }
// try the different lengths
   const int lengths[15] = {800000, 500000, 400000, 350000, 300000,
                            200000, 100000, 80000, 50000, 40000,
                            35000, 30000, 20000, 10240, 8000};
   for(int testrun=0;testrun<15;testrun++)
     {
       length = lengths[testrun];
   complete=min=max=0.0;
   cout<<"option: "<<option<<"    data size: "<<length<<endl;
   cout<<endl;
//send data, measure time
   for (int test=0;test<numberOfRuns;)
     {
       tcpSender testSender=tcpSender(argv[1],line,length);
       testSender.init(option);
       gettimeofday(&start,NULL);
       testSender.work();
       closeid=testSender.end();
       gettimeofday(&end,NULL);

       before=start.tv_sec+1.e-6*start.tv_usec;
       after=end.tv_sec+1.e-6*end.tv_usec;
       duration[test]=after-before;
       cout<<"time for run "<<test<<" : "<<duration[test]<<" sec"<<endl;
       complete +=duration[test];
       if (min==0)
        min=duration[test];
       else
        if (duration[test]<min)
          min=duration[test];
       if (max<duration[test])
        max=duration[test];

       test++;
       //break before next run, same length
       sleep(1);
     }

   cout<<endl;
   cout<<"average time: "<<complete/numberOfRuns<<" sec "<<endl;
   cout<<"fastest run: "<<min<<" sec"<<endl;
   cout<<"slowest run: "<<max<<" sec"<<endl;
   cout<<endl;
   cout<<endl;
   //break before new length
   sleep(5);
     }
   return 0;
}

 
 
 

tcp sends bigger data faster than small data

Post by Rick Jones » Thu, 02 Oct 2003 04:15:42



> Yes - this sort of accords with a vague idea I have that over a small
> number of fragments the NIC will wait to see if there are any more
> incoming before telling the system it has something.

Some NIC/driver combos (particularly Gigabit, but perhaps some 100BT)
may not have the "interrupt avoidance" or "coalescing" parms set
terribly well for anything other than large bulk transfers.  Such
situations can often be uncovered with a single-byte, netperf TCP_RR
test:

$ netperf -t TCP_RR -H <remote> -l <time> -- -r 1

rick jones
--
portable adj, code that compiles under more than one compiler
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to raj in cup.hp.com  but NOT BOTH...

 
 
 

tcp sends bigger data faster than small data

Post by Stephan Absmeier » Thu, 02 Oct 2003 06:39:00


Rick Jones schrieb:


>>Yes - this sort of accords with a vague idea I have that over a small
>>number of fragments the NIC will wait to see if there are any more
>>incoming before telling the system it has something.

> Some NIC/driver combos (particularly Gigabit, but perhaps some 100BT)
> may not have the "interrupt avoidance" or "coalescing" parms set
> terribly well for anyting other than large bulk transfers.  Such
> situations can often be uncovered with a single-byte, netperf TCP_RR
> test:

> $ netperf -t TCP_RR -H <remote> -l <time> -- -r 1

I have no idea what this is saying to me. Can anybody help
and explain it to me? Thanks.
To me it looks like it is OK. I did the first one with -r
80000 and the Trans Rate per sec was 56.37. I did the second
(-l 10) with -r 80000. Result: 71.34

./netperf -t TCP_RR -H 192.169.1.110 -l 1 -- -r 1

TCP REQUEST/RESPONSE TEST to 192.169.1.110
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       0.99     9974.64
16384  87380

  ./netperf -t TCP_RR -H 192.169.1.110 -l 10 -- -r 1

TCP REQUEST/RESPONSE TEST to 192.169.1.110
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       9.99     9881.40
16384  87380

  ./netperf -t TCP_RR -H 192.169.1.110 -l 100 -- -r 1

TCP REQUEST/RESPONSE TEST to 192.169.1.110
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       100.00   9899.73
16384  87380

./netperf -t TCP_STREAM -H 192.169.1.110 -l 1 -- -r 1

TCP STREAM TEST to 192.169.1.110
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

  87380  16384  16384    1.00       93.76

  ./netperf -t TCP_STREAM -H 192.169.1.110  -- -r 1

TCP STREAM TEST to 192.169.1.110
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

  87380  16384  16384    10.00      93.85
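
Read literally, these numbers look healthy: roughly 9900 one-byte
transactions per second is about 100 microseconds per request/response
round trip, in the same range as the ping times quoted earlier, and the
TCP_STREAM results of about 93.8 Mbit/s are essentially 100BT wire speed.
So, for what it's worth, this test does not show the card adding any large
fixed delay in steady state.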

 
 
 

tcp sends bigger data faster than small data

Post by Stephan Absmeier » Thu, 02 Oct 2003 17:47:41


Rick Jones schrieb:


>>Yes - this sort of accords with a vague idea I have that over a small
>>number of fragments the NIC will wait to see if there are any more
>>incoming before telling the system it has something.

> Some NIC/driver combos (particularly Gigabit, but perhaps some 100BT)
> may not have the "interrupt avoidance" or "coalescing" parms set

Is there a possibility to change these settings? If yes: how?
> terribly well for anything other than large bulk transfers.  Such
> situations can often be uncovered with a single-byte, netperf TCP_RR
> test:

> $ netperf -t TCP_RR -H <remote> -l <time> -- -r 1

> rick jones
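
As for changing those settings: on Linux the interrupt-coalescing
parameters can usually be inspected with "ethtool -c <interface>" and, with
root rights, changed with something like "ethtool -C <interface> rx-usecs
<n> rx-frames <n>"; whether a particular 100BT driver supports this at all
varies, so it may simply not be available here.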

 
 
 
