Strange system time values through *getrusage* system call.

Post by Pranab Banerjee » Sat, 02 Dec 1995 04:00:00

Can anyone explain the following results that I am getting from the "getrusage"
system call on a DEC Alphastation running OSF1 V3.2?

I am trying to find the CPU time used to compute a function that does no
I/O at all. It essentially scales a 2D image (a floating-point array) by
interpolation.

Here is the relevant code segment:

/**********************************************************/

#include <stdio.h>
#include <malloc.h>
#include <math.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/resource.h>

extern void my_func() ;

int main(void)
{
 int status ;
 long sys_time_1, sys_time_2 ;
 struct rusage *r_usage ;

 r_usage = (struct rusage *)malloc(sizeof(struct rusage)) ;
        .
        .
        .

 /* find the system time used so far and save it in sys_time_1 */
 if ( (status = getrusage(RUSAGE_SELF, r_usage)) != 0 )
 {
  perror("getrusage error") ;
 }
 else
 {
  sys_time_1 = r_usage->ru_stime.tv_usec ;
 }

 /* compute the function now */
 my_func(...) ;

 /* find the system time used till now and save it in sys_time_2 */

 if ( (status = getrusage(RUSAGE_SELF, r_usage)) != 0 )
 {
  perror("getrusage error") ;
 }
 else
 {
  sys_time_2 = r_usage->ru_stime.tv_usec ;
 }

 /* Find the system time used by the function as (sys_time_2 - sys_time_1) */
 printf("System time used for interpolation = %ld\n", (sys_time_2 - sys_time_1) ) ;

        .
        .
        .

}

/**********************************************************/

I was assuming that this would give me the actual CPU time used for the
computation of the function "my_func", in microseconds. But I get all
sorts of values, ranging from 3904 to as high as 11712, when I run the
code 50 times.

Isn't this number supposed to be the same every time if it truly gives me
the "system time" (in terms of {(CPU cycles)/(clock freq)})?
Or am I missing something?

Also, when I look at the man page for the "clock" function, it says:

"The clock() function is made obsolete by the getrusage() function; however,
 it is included for compatibility with older BSD programs."

I thought this meant I could still use "clock" instead of "getrusage" and
would get similar results (in microseconds, according to the man page).
But if I change the two lines (in the above block):

 sys_time_1 = r_usage->ru_stime.tv_usec ;
and
 sys_time_2 = r_usage->ru_stime.tv_usec ;

to

 sys_time_1 = clock() ;
and
 sys_time_2 = clock() ;

and then run the code, I get very different numbers, ranging from 183326
to 216658! These are much larger than the CPU times I got through "getrusage".

Any clarifications/suggestions/explanations would be highly appreciated.

Thank you.

It would be very helpful if you could also send me a copy of your reply
directly, in addition to posting on the newsgroup.

Thank you.

Sincerely,

-Pranab K. Banerjee
 Crump Institute for Biological Imaging
 University of California, Los Angeles


Strange system time values through *getrusage* system call.

Post by Tim Goodwin » Wed, 06 Dec 1995 04:00:00

>Can anyone explain the following results that I am getting from "getrusage"
>system function call on a DEC Alphastation running OSF1 V3.2.

You appear to have assumed that the CPU times will always be less
than one second; my guess is that they're not, and this is what is
giving you the strange results.

>  sys_time_1 = r_usage->ru_stime.tv_usec ;

Try changing this to...

    sys_time_1 = r_usage->ru_stime.tv_sec * 1e6 + r_usage->ru_stime.tv_usec;

[ I've added comp.unix.programmer to the newsgroups, and directed
followups there. ]

Tim.
--
Tim Goodwin   | "After all, what did Brunel, Watt, Boulton and
Unipalm PIPEX | Telford do that was complex?  I could have built
Cambridge, UK | the Great Western Railway on my own." -- Ian Batten