fine-grained timing in UNIX

fine-grained timing in UNIX

Post by Gautham K. Kudva » Thu, 11 Feb 1993 22:15:07



I've come across a problem that has me floored and I'm hoping someone
here can help me out. We're developing an LP solver in C++. The
development platforms include SunOS 4.1 and HP-UX. What we'd
like to do is time some of the algorithmic steps during execution, and
take different execution paths based on the timing information. We need
timings for code fragments that may take on the order of a few hundred
microseconds to tens of milliseconds to execute. This is where we have the problem.
I've checked all the man pages + Rich Stevens' "Advanced Programming...",
and all of them say that the minimum resolution I can get is 1/100 of
a sec (1/60th of a sec. on SunOS). This doesn't seem to make sense
considering that the quartz clock on the computer can definitely give
much higher resolution. What gives? Am I missing something very
obvious? Averaging of execution times over a zillion runs is not a
viable option, since we want to make decisions based on timing information
obtained during a single problem solution, and the timings will vary widely
depending on the type of data.
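
For concreteness, a minimal sketch of the kind of tick-based measurement
that runs into this resolution limit (assuming POSIX times() and sysconf();
fragment_to_time() is a hypothetical stand-in for the code being measured):

    #include <stdio.h>
    #include <sys/times.h>
    #include <unistd.h>

    static void fragment_to_time(void)       /* hypothetical stand-in */
    {
        volatile double x = 0.0;
        int i;
        for (i = 0; i < 100000; i++)
            x += i * 0.5;
    }

    int main(void)
    {
        struct tms start, end;
        long ticks_per_sec = sysconf(_SC_CLK_TCK);   /* typically 60 or 100 */

        times(&start);
        fragment_to_time();
        times(&end);

        /* CPU time is only resolved to whole clock ticks, i.e. 10-16 ms,
         * which is far too coarse for sub-millisecond fragments. */
        printf("user ticks: %ld (each tick = %.1f ms)\n",
               (long)(end.tms_utime - start.tms_utime),
               1000.0 / ticks_per_sec);
        return 0;
    }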

I'd appreciate any suggestions/solutions/explanations as to why this
can't be done. Please post or e-mail; I'll summarize.

Thanks much,
Gautham Kudva


 
 
 

fine-grained timing in UNIX

Post by Jim Re » Fri, 12 Feb 1993 01:10:23



   I've checked all the man pages + Rich Stevens' "Advanced Programming...",
   and all of them say that the minimum resolution I can get is 1/100 of
   a sec (1/60th of a sec. on SunOS). This doesn't seem to make sense
   considering that the quartz clock on the computer can definitely give
   much higher resolution. What gives? Am I missing something very
   obvious? Averaging of execution times over a zillion runs is not a
   viable option, since we want to make decisions based on timing information
   obtained during a single problem solution, and the timings will vary widely
   depending on the type of data.

Most UNIX systems have at least two clocks and there usually is little
or no synchronisation between them. One is the real-time clock which
generates very high priority interrupts at a typical rate of 60 or 100
Hz. It is this clock which gets used for things like process scheduling,
paging, profiling, process accounting, timeouts and so on. The second
clock is a time-of-day clock. Its main purpose is to set the date when
the system is booted. This clock then runs free. The system reports
the time of day by keeping count of the number of ticks of the
real-time clock since the UNIX epoch, though it is possible that it
may consult the TOD clock instead.

The quality of the time-of-day clock is variable. Some are cheap and
inaccurate. Others keep good time and/or provide very good resolution -
perhaps as low as microseconds. Depending on the OS, you might get
this information returned in a gettimeofday() call. Even if you do, it
is not wise to place too much emphasis on it. It takes time to pull
this information from the TOD clock and return it to the user. In
between these events, your process could service an unknown number of
interrupts. It will also have the overheads of switching the processor
from kernel mode to user mode. You might also get a few context
switches taking place before your system call returns. (Scheduling
decisions are made when a process is about to return to user mode -
for instance as a system call completes.) What this means is that the
data you get back from the system will be out of date by an
indeterminate amount of time. This is no good for fine-resolution time
measurements. It also helps explain why UNIX is not much good for
real-time processing.
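
A minimal sketch of reading gettimeofday(), and of gauging the call's own
cost by timing two back-to-back calls (whether tv_usec carries real
microsecond data depends on the OS and hardware, as noted above):

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval t0, t1;
        long usec;

        gettimeofday(&t0, NULL);
        gettimeofday(&t1, NULL);   /* nothing in between but the call itself */

        /* The difference is a rough lower bound on the overhead added to
         * any interval measured this way. */
        usec = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
        printf("back-to-back gettimeofday() calls differ by %ld us\n", usec);
        return 0;
    }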

So, if you're planning to measure the execution time of a routine, you
need to take the average of a number of calls so that you minimise the
impact that these kernel overheads will have on your measurement. If
the execution time is data dependent, repeat the timing measurements
for the typical data examples.
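
Along those lines, a minimal sketch of the averaging approach: time N
repetitions with gettimeofday() and divide, so the fixed per-call kernel
overhead is amortised across the runs (N and run_fragment() are assumptions,
not anything from the original code):

    #include <stdio.h>
    #include <sys/time.h>

    #define N 1000                             /* assumed repetition count */

    static void run_fragment(void)             /* hypothetical stand-in */
    {
        volatile double x = 0.0;
        int i;
        for (i = 0; i < 1000; i++)
            x += i * 0.5;
    }

    int main(void)
    {
        struct timeval t0, t1;
        double total_usec;
        int i;

        gettimeofday(&t0, NULL);
        for (i = 0; i < N; i++)
            run_fragment();
        gettimeofday(&t1, NULL);

        /* Dividing by N spreads the measurement overhead across the runs. */
        total_usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("average per run: %.1f us\n", total_usec / N);
        return 0;
    }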

                Jim

 
 
 

Fine-grained user authorization

Most Unix systems use the passwd file or map for both user authentication
and user authorization.  That is to say, the login command and servers
such as pop, imap, ftp, tacacs, radius, etc. all require only the presence
of an entry in the passwd file or map to authorize a user for that service.
Even sendmail uses the passwd file or map as a list of local mailboxes.
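
To make that concrete, a minimal sketch of the check most such services
effectively perform: the user is "authorized" simply because getpwnam()
finds a passwd entry (the function and user name here are hypothetical):

    #include <stdio.h>
    #include <pwd.h>

    /* Any entry in the passwd file or map counts as authorization;
     * there is no per-service or per-host check. */
    static int user_is_authorized(const char *name)
    {
        return getpwnam(name) != NULL;
    }

    int main(void)
    {
        const char *user = "mills";            /* hypothetical user name */
        printf("%s: %s\n", user,
               user_is_authorized(user) ? "authorized" : "denied");
        return 0;
    }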

We would like to have a single user database, but authorize users for
some services but not for others, and for some machines but not others.
For example, only administrators should be allowed to log into the
file servers, but a larger group of users should be allowed to log
into compute servers.  Some users should have mail and terminal server
access, but should not be allowed to log in anywhere.

I know there are clumsy ways to accomplish this now, with netgroups,
special shells, and separate passwd files, but is there a more elegant
way to do this?  Is anything like this being developed?  Is LDAP the
answer?

--
-Gary Mills-    -Unix Support-    -U of M Academic Computing and Networking-
