## Performance Indicator for Computational Complexity

### Performance Indicator for Computational Complexity

Hello everyone,
I want to compare the computational complexity of two algorithms
in my paper. All the computations here can be divided into four
categories: floating-point add, floating-point multiplication,
fixed-point add, and fixed-point multiplication.
One straightforward way to express the computational load for
each algorithm is to list the number of operations belonging to the
above four kinds separately. Is there a simpler way to accomplish the
same task? More specifically, is it possible to use a single
performance indicator instead of having to list the four numbers?
Thank you very much for your help.
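One common way to get a single indicator is a weighted sum of the four counts. Here is a minimal sketch; the weights are illustrative assumptions, not measured constants, and would need to be justified for whatever hardware class the paper targets:

```python
# Assumed relative costs (in "flop equivalents") -- illustrative only.
WEIGHTS = {
    "float_add": 1.0,
    "float_mul": 1.0,
    "fixed_add": 0.5,
    "fixed_mul": 1.0,
}

def flop_equivalents(counts):
    """Collapse per-category operation counts into one weighted number."""
    return sum(WEIGHTS[op] * n for op, n in counts.items())

# Hypothetical operation counts for one algorithm:
alg_a = {"float_add": 100, "float_mul": 200, "fixed_add": 50, "fixed_mul": 10}
print(flop_equivalents(alg_a))  # 335.0
```

The catch, as the replies below note, is that no single set of weights is valid across processors, so any such number should be reported alongside the raw four counts.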

### Performance Indicator for Computational Complexity

If you're interested, here's a rough estimate:

fixed multiply = 1
(flop equivalents)

This depends on how fast your floating point unit is compared to the
integer processing.  What kind of processor is in use for the testing?
Additionally, memory movement tends to take more time on modern
machines than processing.

Of course, if you're talking about algorithmic complexity in terms of
O(n log n) vs. O(n), that's something else.  All the above operations are
constant time.  Calculating algorithmic complexity is much more work.
If that's what you're looking for, I could give some suggestions on
that too.

> Hello everyone,
>       I want to compare the computational complexity of two algorithms
> in my paper. All the computations here can be divided into four
> categories: floating-point add, floating-point multiplication,
> fixed-point add, and fixed-point multiplication.
>       One straightforward way to express the computational load for
> each algorithm is to list the number of operations belonging to the
> above four kinds separately. Is there a simpler way to accomplish the
> same task? More specifically, is it possible to use a single
> performance indicator instead of having to list the four numbers?
>       Thank you very much for your help.

### Performance Indicator for Computational Complexity

Thank you very much for your help, Steven.

> If you're interested, here's a rough estimate:

> fixed multiply = 1
> (flop equivalents)

Can I find some references detailing the above estimate?

> This depends on how fast your floating point unit is compared to the
> integer processing.  What kind of processor is in use for the testing?
> Additionally, memory movement tends to take more time on modern
> machines than processing.

Since it is an academic paper, I don't want to target a specific
hardware platform. In fact, the proposed algorithm can be implemented
on a PC, workstation, DSP, or ASIC.

> Of course, if you're talking about algorithmic complexity in terms of
> O(nlog(n)) vs O(n), that's something else.  All above operations are
> constant time.  Calculating Algorithmic complexity is much more work.
> If that's what you're looking for, I could give some suggestions on
> that too.

The algorithmic complexities of my algorithms are probably of the form
c1*O(f(n)) and c2*O(f(n)), i.e. the difference lies only in the
constant terms. What's more, since the algorithms involve an iterative
process adaptive to the contents of the data, it's difficult to give a
fixed estimate of c1 and c2, although I'm sure c1 < c2.  For this
reason, I have to resort to experimental results.
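One way to get those experimental results is to instrument the code with a simple operation counter and tally the four categories over representative data. The `run_algorithm` body below is a placeholder standing in for the adaptive iterative process, not the actual algorithm:

```python
class OpCounter:
    """Tallies the four operation categories while an algorithm runs."""
    def __init__(self):
        self.counts = {"float_add": 0, "float_mul": 0,
                       "fixed_add": 0, "fixed_mul": 0}

    def tick(self, op, n=1):
        self.counts[op] += n

def run_algorithm(data, ctr):
    # Placeholder for the data-adaptive iterative process.
    acc = 0.0
    for x in data:
        acc += x * x          # one float multiply and one float add
        ctr.tick("float_mul")
        ctr.tick("float_add")
    return acc

ctr = OpCounter()
run_algorithm([0.5, 1.5, 2.5], ctr)
print(ctr.counts["float_mul"], ctr.counts["float_add"])  # 3 3
```

Averaging the counts over many inputs gives empirical estimates of c1 and c2 without committing to a hardware platform.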

### Performance Indicator for Computational Complexity

> Thank you very much for your help, Steven.

> > If you're interested, here's a rough estimate:

> > fixed multiply = 1
> > (flop equivalents)

> Can I find some references detailing the above estimate?

No, it's a rule of thumb from Carnegie Mellon CS.
It varies significantly based on the processor.
Fixed-point add, subtract, and bit shift are all very fast (particularly
bit shift).  If you're going to do any divisions by powers of 2, do
them as a bit shift.  Your compiler may do that automatically, if you
make sure the divisors are all powers of 2.
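To illustrate the shift trick (shown here in Python; a C compiler will typically generate the shift for you when the divisor is a constant power of two and the operand is unsigned):

```python
# For non-negative integers, dividing by a power of two is exactly a
# right shift, one of the cheapest fixed-point operations available.
x = 200
assert x >> 1 == x // 2    # divide by 2
assert x >> 3 == x // 8    # divide by 8
print(x >> 3)  # 25
```

Note that for negative values the equivalence depends on the language's rounding rules, which is one reason compilers are conservative about this optimization for signed types.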

The fixed/float comparison varies significantly depending on the
processor in use.  Floating point operations on Intel processors tend
to be somewhat slow in comparison to other processors.

Floating point add and subtract are only marginally faster than
floating point multiply.  Thus, I lumped them together.  Fixed
multiply is comparable, though usually somewhat faster than floating
point multiply.

These estimates are only accurate to about plus or minus 40%.  They're
crude.

> > This depends on how fast your floating point unit is compared to the
> > integer processing.  What kind of processor is in use for the testing?
> > Additionally, memory movement tends to take more time on modern
> > machines than processing.

> Since it is an academic paper, I don't want to have a specific target
> hardware platform. In fact, the proposed algorithm can be implemented
> with PC, Workstation, DSP, or ASICs.

Then about all you can say is that a fixed-point add or bit shift is
going to be at least twice as fast as a multiply, and significantly
faster than a floating point operation.

> > Of course, if you're talking about algorithmic complexity in terms of
> > O(nlog(n)) vs O(n), that's something else.  All above operations are
> > constant time.  Calculating Algorithmic complexity is much more work.
> > If that's what you're looking for, I could give some suggestions on
> > that too.

> The algorithmic complexities of my algorithms are probably of the form
> c1*O(f(n)) and c2*O(f(n)), i.e. the difference lies only in the
> constant terms. What's more, since the algorithms involve an iterative
> process adaptive to the contents of the data, it's difficult to give a
> fixed estimate of c1 and c2, although I'm sure c1 < c2.  For this
> reason, I have to resort to experimental results.

That's about all you can use.  This really is processor-dependent; you
can't make any precise assumptions.

### Performance Indicator for Computational Complexity


> The algorithmic complexities of my algorithms are probably of the form
> c1*O(f(n)) and c2*O(f(n)), i.e. the difference lies only in the
> constant terms. What's more, since the algorithms involve an iterative
> process adaptive to the contents of the data, it's difficult to give a
> fixed estimate of c1 and c2, although I'm sure c1 < c2.  For this
> reason, I have to resort to experimental results.

There is no such thing as a constant term in asymptotic notation. A
function that takes 1,000,000 * n time and one that takes
0.00000000001 * n are both O(n). The point of asymptotic notation is to
quantify how the cost grows with the size of the input. It does not
tell you which algorithm is faster for a given input.

For instance, consider two algorithms, one whose actual time is
0.000001 * n and one that takes 1000000 * log n. The first is O(n) and
the second is O(log n). For many inputs the first will be faster. All
asymptotic notation tells you is that, for a large enough input, the
second algorithm will be faster.
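The crossover can be checked numerically; this sketch just evaluates the two hypothetical cost functions from the paragraph above:

```python
import math

def t1(n):                  # actual time of the O(n) algorithm
    return 1e-6 * n

def t2(n):                  # actual time of the O(log n) algorithm
    return 1e6 * math.log(n)

print(t1(1000) < t2(1000))          # True: the O(n) algorithm wins here
print(t1(10**14) > t2(10**14))      # True: O(log n) wins for large n
```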
--
Dale King

### Performance Indicator for Computational Complexity

MPEG-1 video encoder?
For example: the DCT coding complexity, motion estimation complexity,
variable-length coding complexity, etc.

Ken Cheung
--
--------------------------------------------------
__     __
/ _\___/_ \
\/       \/             Ken Cheung

|   *   |   Dept. of Electronic Engineering
\ { = } /    City University of Hong Kong
-------
--------------------------------------------------