I've come across a problem that has me floored and I'm hoping someone
here can help me out. We're developing an LP solver in C++. The
development platforms include SunOS 4.1 and HP-UX. What we'd
like to do is time some of the algorithmic steps during execution, and
take different execution paths based on the timing information. We need
timings for code fragments that may take on the order of a few hundred
microseconds to tens of milliseconds to execute. This is where we have
the problem.
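For concreteness, here's a minimal sketch of the measurement we're after,
using gettimeofday(2). Both systems document it, and its struct timeval
carries a microseconds field, but I don't know what granularity that field
actually ticks at on these machines (the usec_diff helper and the timed
fragment are just stand-ins):

    #include <sys/time.h>   /* gettimeofday(), struct timeval */
    #include <stdio.h>

    /* Wall-clock difference t1 - t0 in microseconds.  Assumes the two
       readings are close enough together that the result fits in a long. */
    static long usec_diff(const struct timeval &t0, const struct timeval &t1)
    {
        return (t1.tv_sec - t0.tv_sec) * 1000000L
             + (t1.tv_usec - t0.tv_usec);
    }

    int main()
    {
        struct timeval before, after;

        gettimeofday(&before, (struct timezone *) 0);
        /* ... algorithmic step to be timed goes here ... */
        gettimeofday(&after, (struct timezone *) 0);

        printf("step took %ld usec\n", usec_diff(before, after));
        return 0;
    }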
I've checked all the man pages + Rich Stevens' "Advanced Programming
in the UNIX Environment", and all of them say that the minimum resolution
I can get is 1/100 of a second (1/60 of a second on SunOS). This doesn't
seem to make sense, considering that the quartz clock on the computer can
definitely give much higher resolutions. What gives? Am I missing
something very obvious?

Averaging execution times over a zillion runs is not a viable option,
since we want to make decisions based on timing information obtained
during a single problem solution, and the timings will vary widely
depending on the type of data.
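To make the resolution problem concrete, this is the tick-granularity
interface the man pages steer us toward (a sketch assuming the POSIX
times() and sysconf() calls they describe). At 100 ticks/sec (60 on
SunOS), a fragment that runs for a few hundred microseconds will almost
always report zero elapsed ticks, which is exactly our problem:

    #include <sys/times.h>  /* times(), struct tms */
    #include <unistd.h>     /* sysconf() */
    #include <stdio.h>

    int main()
    {
        struct tms t0, t1;
        long ticks_per_sec = sysconf(_SC_CLK_TCK);  /* 100 on HP-UX, 60 on SunOS */

        clock_t start = times(&t0);
        /* ... algorithmic step to be timed goes here ... */
        clock_t stop = times(&t1);

        /* Elapsed real time, quantized to whole clock ticks. */
        printf("elapsed: %ld of %ld ticks/sec\n",
               (long) (stop - start), ticks_per_sec);
        return 0;
    }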
I'd appreciate any suggestions/solutions/explanations as to why this
can't be done. Please post or e-mail; I'll summarize.
Thanks much,
Gautham Kudva