>Hello!
>Here's the output from a ping command done on a DGUX AViiON system:
>admin-rrw(81)# ping -snc2 -i 1 1.1.1.40
>PING 1.1.1.40 (1.1.1.40): 56 data bytes
>64 bytes from 1.1.1.40: icmp_seq=0 ttl=59 time=7.812 ms
>64 bytes from 1.1.1.40: icmp_seq=1 ttl=59 time=0 ms
>--- 1.1.1.40 ping statistics ---
>2 packets transmitted, 2 packets received, 0% packet loss
>round-trip min/avg/max = 0/3.906/7.812 ms
>admin-rrw(82)#
>See it measuring 3.906 and 7.812 milliseconds!!
>How does ping do that?
>The smallest unit of time I can find for any time related system call is
>10,000ms!
Not sure what you mean there, but certainly not 10,000 ms
(milliseconds) -- perhaps you are thinking of the common 10 ms
clock tick, which is 10,000 microseconds.
>Any help with this, even guesses will be greatly appreciated!
Find a way to read up on the structures and functions in
/usr/include/sys/time.h, specifically getitimer() and
setitimer() and the related structures timeval and itimerval.
(Start with the man page for setitimer.)
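For what it's worth, here is a rough sketch (my own, not the
DG/UX ping source) of the usual approach: stamp the packet with
gettimeofday(), stamp it again when the echo comes back, and
subtract the two struct timeval values.  The tv_usec field is
what gives the microsecond precision:

  /* Rough sketch of timeval-based timing, not the actual ping
   * source.  BSD-derived pings typically stamp each packet with
   * gettimeofday(), which fills in the same timeval structure
   * the itimer calls use. */
  #include <stdio.h>
  #include <sys/time.h>

  int main(void)
  {
      struct timeval start, end;
      long usec;

      gettimeofday(&start, NULL);
      /* ... send the packet and wait for the echo reply ... */
      gettimeofday(&end, NULL);

      /* Carry the difference across the seconds field so the
       * result is a single count of elapsed microseconds. */
      usec = (end.tv_sec - start.tv_sec) * 1000000L
           + (end.tv_usec - start.tv_usec);
      printf("time=%ld.%03ld ms\n", usec / 1000, usec % 1000);
      return 0;
  }

That gets you microsecond *precision*; whether the underlying
clock actually advances every microsecond is another matter,
which brings up the next point.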
But then also consider that precision and accuracy are two
distinct issues. Just because a result is presented to a
precision of 1 microsecond does not mean it is accurate to 1
microsecond! For example, the second packet listed above is
unlikely to have actually returned in 0 milliseconds, regardless
of what mechanism was used to measure it. Averaging that with
any other number, even an accurate one, carries the error along:
the 3.906 ms in the summary is just (0 + 7.812)/2. If the other
number is 7.812 ms, but was obtained from a register that is
updated only every 152 microseconds, then the results are still
useful, but the precision presented does not reflect the
accuracy.
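To make the granularity point concrete, here is a toy example
(all numbers hypothetical, not measured): if the clock register
only advances every 10 ms, two readings taken 7 ms apart can
land in the same tick and subtract to zero, exactly like that
second packet:

  /* Toy illustration of clock granularity, not real ping code.
   * TICK_USEC is a hypothetical 10 ms update interval. */
  #include <stdio.h>

  #define TICK_USEC 10000L

  /* What a coarse clock reports: the true time rounded down
   * to the most recent tick. */
  long coarse_clock(long true_usec)
  {
      return (true_usec / TICK_USEC) * TICK_USEC;
  }

  int main(void)
  {
      long start = 123450000L;    /* chosen on a tick boundary */
      long trip  = 7000L;         /* a real 7 ms round trip    */
      long measured = coarse_clock(start + trip)
                    - coarse_clock(start);

      printf("true: %ld us, measured: %ld us\n", trip, measured);
      return 0;                   /* prints measured: 0 us */
  }

Whether two readings collide or straddle a tick boundary depends
only on where the packet happens to land relative to the tick,
which is how one sample can read 7.812 ms and the next 0.  (It
is probably no coincidence that 7.812 ms is almost exactly 1/128
of a second, a plausible clock granularity.)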
Floyd
--
Ukpeagvik (Barrow, Alaska)