Hello everyone,

I want to compare the computational complexity of two algorithms in my
paper. All the computations here can be divided into four categories:
floating-point addition, floating-point multiplication, fixed-point
addition, and fixed-point multiplication.

One straightforward way to express the computational load of each
algorithm is to list the number of operations in each of the four
categories separately. Is there a simpler way to accomplish the same
task? More specifically, is it possible to use a single performance
indicator instead of having to list the four numbers?

Thank you very much for your help.

If you're interested, here's a rough estimate (in flops equivalents):

fixed add/subtract = 1/3
float add/subtract/multiply = 1
fixed multiply = 1

These weights depend on how fast your floating-point unit is compared
to your integer unit. What kind of processor are you using for the
testing? Additionally, memory movement tends to take more time than
processing on modern machines.
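If it helps, here's a minimal sketch in C of how the four counts could
be collapsed into one flops-equivalent number. The weights and the
names (W_FIXED_ADD, flops_equivalent, the sample counts) are my own
illustration, not a standard; tune the weights to your target hardware.

#include <stdio.h>

/* Hypothetical weights (flops equivalents) from the rough estimate
   above; they are processor-dependent and only crude (~+/-40%). */
#define W_FIXED_ADD  (1.0 / 3.0)  /* fixed-point add/subtract */
#define W_FIXED_MUL  1.0          /* fixed-point multiply     */
#define W_FLOAT_ADD  1.0          /* floating-point add/sub   */
#define W_FLOAT_MUL  1.0          /* floating-point multiply  */

/* Collapse the four operation counts into one flops-equivalent number. */
static double flops_equivalent(long fixed_add, long fixed_mul,
                               long float_add, long float_mul)
{
    return W_FIXED_ADD * fixed_add + W_FIXED_MUL * fixed_mul +
           W_FLOAT_ADD * float_add + W_FLOAT_MUL * float_mul;
}

int main(void)
{
    /* Example: made-up operation counts for two algorithms. */
    printf("algorithm A: %.1f flops equivalents\n",
           flops_equivalent(12000, 4000, 2000, 1000));
    printf("algorithm B: %.1f flops equivalents\n",
           flops_equivalent(3000, 1000, 8000, 6000));
    return 0;
}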

Of course, if you're talking about algorithmic complexity in terms of
O(n log n) vs. O(n), that's something else. All of the above
operations are constant time. Calculating algorithmic complexity is
much more work. If that's what you're looking for, I could give some
suggestions on that too.

> Hello everyone,
>
> I want to compare the computational complexity of two algorithms in my
> paper. All the computations here can be divided into four categories:
> floating-point addition, floating-point multiplication, fixed-point
> addition, and fixed-point multiplication.
>
> One straightforward way to express the computational load of each
> algorithm is to list the number of operations in each of the four
> categories separately. Is there a simpler way to accomplish the same
> task? More specifically, is it possible to use a single performance
> indicator instead of having to list the four numbers?
>
> Thank you very much for your help.

Thank you very much for your help, Steven.

> If you're interested, here's a rough estimate (in flops equivalents):
>
> fixed add/subtract = 1/3
> float add/subtract/multiply = 1
> fixed multiply = 1

Can I find some references detailing the above estimate?

> These weights depend on how fast your floating-point unit is compared
> to your integer unit. What kind of processor are you using for the
> testing? Additionally, memory movement tends to take more time than
> processing on modern machines.

Since it is an academic paper, I don't want to target a specific
hardware platform. In fact, the proposed algorithm can be implemented
on a PC, a workstation, a DSP, or an ASIC.

> Of course, if you're talking about algorithmic complexity in terms of
> O(n log n) vs. O(n), that's something else. All of the above
> operations are constant time. Calculating algorithmic complexity is
> much more work. If that's what you're looking for, I could give some
> suggestions on that too.

The algorithmic complexities of my algorithms are probably of the form
c1*O(f(n)) and c2*O(f(n)), i.e. the difference lies only in the
constant factors. What's more, since the algorithms involve an
iterative process that adapts to the contents of the data, it's
difficult to give a fixed estimate of c1 and c2, although I'm sure
c1 < c2. For this reason, I have to resort to experimental results.
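A minimal sketch of the kind of instrumentation I mean, counting
operations per category at run time (the op_counts struct and the
instrumented dot product are illustrative only, not my actual
algorithm):

#include <stdio.h>

/* Hypothetical per-category operation counters; increment them at
   each arithmetic operation in the instrumented algorithm. */
struct op_counts {
    long fixed_add, fixed_mul, float_add, float_mul;
};

/* Example: an instrumented dot product, counting float operations. */
static double dot(const double *a, const double *b, int n,
                  struct op_counts *c)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        sum += a[i] * b[i];
        c->float_mul++;   /* a[i] * b[i] */
        c->float_add++;   /* sum += ...  */
    }
    return sum;
}

int main(void)
{
    double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
    struct op_counts c = {0, 0, 0, 0};
    printf("dot = %g\n", dot(a, b, 4, &c));
    printf("float adds = %ld, float muls = %ld\n",
           c.float_add, c.float_mul);
    return 0;
}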

Thanks again for your reply.

> Thank you very much for your help, Steven.
>
> > If you're interested, here's a rough estimate (in flops equivalents):
> >
> > fixed add/subtract = 1/3
> > float add/subtract/multiply = 1
> > fixed multiply = 1
>
> Can I find some references detailing the above estimate?

It varies significantly with the processor.

Fixed-point add, subtract, and bit shift are all very fast
(particularly bit shift). If you're going to do any divisions by
powers of 2, do them as a bit shift; see the sketch below. Your
compiler may do that automatically, if you make sure the divisors are
all powers of 2.
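For example (a minimal sketch; the values are arbitrary):

#include <stdio.h>

int main(void)
{
    unsigned int x = 1000;

    /* Division by a power of 2 can be done as a right shift:
       x / 8 == x >> 3 for unsigned (non-negative) values. Note the
       equivalence does not hold for negative signed values, where
       division and shifting round differently. */
    unsigned int by_divide = x / 8;
    unsigned int by_shift  = x >> 3;

    printf("%u %u\n", by_divide, by_shift);  /* both print 125 */
    return 0;
}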

The fixed/float comparison varies significantly with the processor in
use. Floating-point operations on Intel processors tend to be somewhat
slow compared to other processors.

Floating-point add and subtract are only marginally faster than
floating-point multiply, which is why I lumped them together.
Fixed-point multiply is comparable, though usually somewhat faster
than floating-point multiply.

These estimates are crude; figure on about plus or minus 40% accuracy.

> > These weights depend on how fast your floating-point unit is compared
> > to your integer unit. What kind of processor are you using for the
> > testing? Additionally, memory movement tends to take more time than
> > processing on modern machines.
>
> Since it is an academic paper, I don't want to target a specific
> hardware platform. In fact, the proposed algorithm can be implemented
> on a PC, a workstation, a DSP, or an ASIC.

Then about all you can say is that a fixed-point add or bit shift is
going to be at least twice as fast as a multiply, and significantly
faster than a floating-point operation.

> > Of course, if you're talking about algorithmic complexity in terms of
> > O(n log n) vs. O(n), that's something else. All of the above
> > operations are constant time. Calculating algorithmic complexity is
> > much more work. If that's what you're looking for, I could give some
> > suggestions on that too.
>
> The algorithmic complexities of my algorithms are probably of the form
> c1*O(f(n)) and c2*O(f(n)), i.e. the difference lies only in the
> constant factors. What's more, since the algorithms involve an
> iterative process that adapts to the contents of the data, it's
> difficult to give a fixed estimate of c1 and c2, although I'm sure
> c1 < c2. For this reason, I have to resort to experimental results.

That's about all you can use. This really is processor dependent; you
can't make any precise assumptions.

> The algorithmic complexities of my algorithms are probably of the form
> c1*O(f(n)) and c2*O(f(n)), i.e. the difference lies only in the
> constant factors. What's more, since the algorithms involve an
> iterative process that adapts to the contents of the data, it's
> difficult to give a fixed estimate of c1 and c2, although I'm sure
> c1 < c2. For this reason, I have to resort to experimental results.

There is no such thing as a constant term in asymptotic notation. A
function that takes 1,000,000 * n time and one that takes
0.00000000001 * n time are both O(n). The idea of asymptotic notation
is to quantify how the complexity grows with the size of the input. It
does not tell you which algorithm is faster for a given input.

For instance, consider two algorithms, one whose actual time is
0.000001 * n and one whose actual time is 1,000,000 * log n. The first
is O(n) and the second is O(log n). For many inputs the first will be
faster. All asymptotic notation says is that for a large enough input
the second algorithm will be faster.
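A quick sketch, using the constants from the example above, to find
roughly where the crossover happens (taking "log" as the natural log;
the modeled times are in arbitrary units):

#include <stdio.h>
#include <math.h>

/* Modeled running times from the example above. */
static double time_linear(double n) { return 0.000001 * n; }
static double time_log(double n)    { return 1000000.0 * log(n); }

int main(void)
{
    /* Double n until the O(log n) algorithm becomes the faster one. */
    double n = 2.0;
    while (time_log(n) >= time_linear(n))
        n *= 2.0;
    printf("O(log n) wins once n is around %.3g\n", n);  /* ~3.5e13 */
    return 0;
}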

--

Dale King
