>Here's the output from a ping command done on a DGUX AViiON system:
>admin-rrw(81)# ping -snc2 -i 1 188.8.131.52
>PING 184.108.40.206 (220.127.116.11): 56 data bytes
>64 bytes from 18.104.22.168: icmp_seq=0 ttl=59 time=7.812 ms
>64 bytes from 22.214.171.124: icmp_seq=1 ttl=59 time=0 ms
>--- 126.96.36.199 ping statistics ---
>2 packets transmitted, 2 packets received, 0% packet loss
>round-trip min/avg/max = 0/3.906/7.812 ms
>See it measuring 3.906 and 7.812 milliseconds!!
>How does ping do that?
>The smallest unit of time I can find for any time related system call is
Not sure what you mean there, but it is certainly not 10,000 ms.
>Any help with this, even guesses, will be greatly appreciated!
Find a way to read up on the structures and functions in
/usr/include/sys/time.h, specifically getitimer() and
setitimer() and the related structures timeval and itimerval.
(Start with the man page for setitimer.)
But then also consider that precision and accuracy are two
distinctly separate issues. Just because a result is presented
to a precision of 1 microsecond does not mean it is accurate to
1 microsecond! For example, the second packet listed above is
unlikely to have actually returned in 0 milliseconds, regardless
of what the mechanism to measure it might be. Averaging that
with any number, even an accurate one, results in an error. If
the other number is 7.812 ms, but was obtained from a register
that is updated only every 152 microseconds, then the results may
still be useful, but the precision presented does not represent
the actual accuracy of the measurement.