measure time in milli or micro sec accuracy

Post by Wuncheol He » Wed, 16 Dec 1998 04:00:00



Hello.
I want to measure time in milli or micro sec accuracy.
How can I do that in C code?
Thanks in advance.
 
 
 

measure time in milli or micro sec accuracy

Post by Martin Recktenwal » Wed, 16 Dec 1998 04:00:00



> I want to measure time in milli or micro sec accuracy.

gettimeofday()

   Martin.
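
For reference, a minimal sketch of timing a section of code with
gettimeofday(), as suggested above (microsecond resolution at best;
actual accuracy depends on the system clock):

#include <stdio.h>
#include <sys/time.h>

int main (void)
{
  struct timeval start, end;
  long elapsed_us;

  gettimeofday (&start, NULL);
  /* ... code to be timed goes here ... */
  gettimeofday (&end, NULL);

  /* microseconds elapsed between the two calls */
  elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
             + (end.tv_usec - start.tv_usec);
  printf ("elapsed: %ld us\n", elapsed_us);
  return 0;
}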

 
 
 

measure time in milli or micro sec accuracy

Post by Richard Jones » Wed, 16 Dec 1998 04:00:00


: Hello.
: I want to measure time in milli or micro sec accuracy.
: How can I do that in C code?

On the Pentium, use TSC for sub-microsecond accuracy
with very low overhead (takes 14 cycles to read it
on my box).

$ ../bin/calibrate-tsc
Calibrated Pentium timestamp counter: 265541309 Hz

Rich.

----------------------------------------------------------------------
//      -*- C++ -*-
//
//      CALIBRATE-TSC Copyright (C) 1998 Richard W.M. Jones.
//
//      This program is free software; you can redistribute it and/or modify
//      it under the terms of the GNU General Public License as published by
//      the Free Software Foundation; either version 2 of the License, or
//      (at your option) any later version.
//
//      This program is distributed in the hope that it will be useful,
//      but WITHOUT ANY WARRANTY; without even the implied warranty of
//      MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
//      GNU General Public License for more details.
//
//      You should have received a copy of the GNU General Public License
//      along with this program; if not, write to the Free Software
//      Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307,
//      USA.

// Compile with:
//   g++ -Wall -O2 -o calibrate-tsc calibrate-tsc.cc
// Tested with EGCS 1.0.2

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/time.h>
#include <unistd.h>
#include <iostream.h>

// Read the Pentium TSC.
static inline u_int64_t
rdtsc ()
{
  u_int64_t d;
  // Instruction is volatile because we don't want it to move
  // over an adjacent gettimeofday. That would ruin the timing
  // calibrations.
  __asm__ __volatile__ ("rdtsc" : "=&A" (d));
  return d;

}

// Compute 64 bit value / timeval (treated as a real).
static inline u_int64_t
operator / (u_int64_t v, timeval t)
{
  return u_int64_t (v / (double (t.tv_sec) + double (t.tv_usec) / 1000000.));

}

// Compute left - right for timeval structures.
static inline timeval
operator - (const timeval &left, const timeval &right)
{
  u_int64_t left_us = (u_int64_t) left.tv_sec * 1000000 + left.tv_usec;
  u_int64_t right_us = (u_int64_t) right.tv_sec * 1000000 + right.tv_usec;
  u_int64_t diff_us = left_us - right_us;
  timeval r = { diff_us / 1000000, diff_us % 1000000 };
  return r;

}

// Compute the positive difference between two 64 bit numbers.
static inline u_int64_t
diff (u_int64_t v1, u_int64_t v2)
{
  // The operands are unsigned, so compare before subtracting to
  // avoid wrap-around (an unsigned result can never be negative).
  return v1 > v2 ? v1 - v2 : v2 - v1;
}

int
main (int argc, char *argv [])
{
  // Compute the period. Loop until we get 3 consecutive periods that
  // are the same to within a small error. The error is chosen
  // to be +/- 1% on a P-200.
  const u_int64_t error = 2000000;
  const int max_iterations = 20;
  int count;
  u_int64_t period,
    period1 = error * 2,
    period2 = 0,
    period3 = 0;
  for (count = 0; count < max_iterations; count++)
    {
      timeval start_time, end_time;
      u_int64_t start_tsc, end_tsc;

      gettimeofday (&start_time, 0);
      start_tsc = rdtsc ();
      sleep (1);
      gettimeofday (&end_time, 0);
      end_tsc = rdtsc ();

      period3 = (end_tsc - start_tsc) / (end_time - start_time);

      if (diff (period1, period2) <= error &&
          diff (period2, period3) <= error &&
          diff (period1, period3) <= error)
        break;

      period1 = period2;
      period2 = period3;
    }
  if (count == max_iterations)
    {
      cerr << "calibrate-tsc: gettimeofday or Pentium TSC not stable\n"
           << "  enough for accurate timing.\n";
      exit (1);
    }

  // Set the period to the average period measured.
  period = (period1 + period2 + period3) / 3;

  // Some Pentiums have broken TSCs that increment very
  // slowly or unevenly. My P-133, for example, has a TSC
  // that appears to increment at ~20kHz.
  if (period < 10000000)
    {
      cerr << "calibrate-tsc: Pentium TSC seems to be broken on this CPU.\n";
      exit (1);
    }

  cout << "Calibrated Pentium timestamp counter: " << period << " Hz\n";

}

----------------------------------------------------------------------
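
With the calibrated frequency in hand, a TSC delta converts to elapsed
time by dividing by that frequency. A minimal C sketch of the
conversion (not part of the program above; the tsc_hz parameter is
assumed to hold the Hz value that calibrate-tsc prints):

#include <stdio.h>
#include <sys/types.h>

/* Convert a TSC tick-count delta to microseconds, given the counter
   frequency in Hz measured by calibrate-tsc. */
static double
tsc_delta_to_usec (u_int64_t start_tsc, u_int64_t end_tsc, u_int64_t tsc_hz)
{
  /* elapsed seconds = ticks / Hz; scale by 1e6 for microseconds */
  return (double) (end_tsc - start_tsc) * 1000000.0 / (double) tsc_hz;
}

int main (void)
{
  /* example: 265541 ticks at the 265541309 Hz measured above ~= 1000 us */
  printf ("%f us\n", tsc_delta_to_usec (0, 265541, 265541309ULL));
  return 0;
}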

--
-      Richard Jones. Linux contractor London and SE areas.        -
-    Very boring homepage at: http://www.annexia.demon.co.uk/      -
- You are currently the 1,991,243,100th visitor to this signature. -
-    Original message content Copyright (C) 1998 Richard Jones.    -

 
 
 

measure time in milli or micro sec accuracy

Post by Maciej Golebiewsk » Wed, 16 Dec 1998 04:00:00


Hello,

Is this method also reliable for dual-CPU machines? Do both
CPUs have the same TSC value at any given time, so that the timing
won't get ruined if the process is moved to another CPU between
measurements?

Maciej


> On the Pentium, use TSC for sub-microsecond accuracy
> with very low overhead (takes 14 cycles to read it
> on my box).