TCP latency

Post by J Wunsch » Fri, 26 Jul 1996 04:00:00



(Removed the NetBSD group per their request, they don't seem interested
in benchmarking. ;)  Moved the linux group to .misc, hope that's the
right one.)


>    kill(), umask() These are the best "null system call" benchmark
>                    choices I've seen.  I'd vary the mask in the umask
>                    one so that it was actually changing state.  The

Well, the kill() proposal is ``not quite null'' either, since it has
to walk the process table.  If the process table is linear, finding
process 1 will certainly be quick, but they are often (usually?)
hashed, so the result is probably as accurate as for the /dev/null
write: you've got a good measure to see whether some optimization on
your own system will make things better or worse, but the numbers
hardly compare to other systems (since they have no real meaning, just
like the ``write 1 byte to /dev/null'' is far off from real-world
usage patterns).

umask(umask(0777)) does indeed always change state, but setting the
umask is IMHO cheap enough that any attempt to `optimize away' the
`doesn't change anything' case would probably be a pessimization,
i.e. even setting it to the previous value is cheaper than doing a
comparison (and an expensive branch) first.

> I'm also willing (and interested) to find a different syscall that just
> measures trap overhead, but I haven't seen one yet that I really like.
> The getppid() may be the best out there, though, it's hard to cache
> that.  Thoughts?

Other people also suggested getppid().  I'd say either this or the
umask() were my favorites.

Btw., if you decide to have a ``null IO syscall'' test, why not also
compare writing 1024 bytes to /dev/null?  This would allow an
estimate of scaling problems.

--
Jörg Wunsch                                               Unix support engineer


TCP latency

Post by J Wunsch » Sun, 04 Aug 1996 04:00:00



> >Unlike proprietary OS'es, Linux won't ever die.
> but what happens to linux after linus dies?

It will split into FreeLinux and NetLinux.

:-) :-)

(For those who don't know it: the disappearance of Bill Jolitz, the
father of 386BSD, was the reason why FreeBSD and NetBSD have been
established, independently of each other.)
--
cheers, Jörg


Never trust an operating system you don't have sources for. ;-)


TCP latency

Post by James Raynard » Tue, 06 Aug 1996 04:00:00



[someone else wrote]

>> >Unlike proprietary OS'es, Linux won't ever die.

>> but what happens to linux after linus dies?

>It will split into FreeLinux and NetLinux.

No doubt to be followed (after a decent interval) by OpenLinux :-)

--
James Raynard, Edinburgh, Scotland

http://www.freebsd.org/~jraynard/


TCP latency

Post by Brian Some » Thu, 08 Aug 1996 04:00:00





: [someone else wrote]
: >> >Unlike proprietary OS'es, Linux won't ever die.
: >
: >> but what happens to linux after linus dies?
: >
: >It will split into FreeLinux and NetLinux.

: No doubt to be followed (after a decent interval) by OpenLinux :-)

So who plays Mr Monroy?

--

Don't _EVER_ lose your sense of humour....


TCP latency

Post by J Wunsch » Tue, 13 Aug 1996 04:00:00



> : >> but what happens to linux after linus dies?
> : >
> : >It will split into FreeLinux and NetLinux.

> : No doubt to be followed (after a decent interval) by OpenLinux :-)

> So who plays Mr Monroy ?

Perhaps the same troll who tried it in comp.unix.bsd.freebsd.misc some
weeks ago?  But oh well, he has to pick some postings of the genuine
JMJr first, in order to make fewer mistakes when copying him...

(For those Linuxians who don't know JMJr, he used to be the newsgroup
clown back in the 386BSD days.)

--
cheers, Jörg


Never trust an operating system you don't have sources for. ;-)


1. TCP latency in the presence of RFC1323 (Was: Re: TCP latency)


   There are a number of problems with the way timers are implemented
   in TCP.  The first is granularity.  A slow/fast timeout has an
   inaccuracy of up to 500ms/200ms.  That may cause inaccuracies to creep
   into round-trip estimates.

Shouldn't this be unimportant in the presence of the RFC1323 round
trip _measurement_ (instead of "educated guessing", which old BSD TCP
did)?

Which reminds me: did anybody ever analyze why some (old?) Linux
boxes would damage RFC1323 TCP packets routed through them?  The only
change relative to pre-RFC1323 would be in the TCP part, so maybe it
is a problem in the OS-specific part of the ppp implementation in
Linux?  (Assuming that no other part of the networking code violates
layering.)

Regards,
        Ignatios Souvatzis

--
        Ignatios Souvatzis
Cute quote: "You should also consider that the ST comes fully equipped with a
