(Nick Maclaren) writes:

> >In an upcoming hardware design I'm thinking about using a CPU without
> >a floating point unit. The application uses floating point numbers,
> >so I'll have to do software emulation. However, I can't seem to find
> >any information on how long these operations might take in software.
> >I'm trying to figure out how much processing power I need & choose an
> >appropriate CPU.
> >
> >I have plenty of info on MIPS ratings for the CPUs, and I figured
> >out how many MFLOPS my application needs, but how do I figure out how
> >many MIPS it takes to do so many MFLOPS?
> >
> >Does anyone know of any info resources or methods?
>
> Lots of the latter, but the former are mostly in people's heads or
> on paper. Old paper.
>
> If you want to emulate a hardware floating-point format, you are
> talking hundreds of instructions or more, depending on how clever
> you are and the interface you use. If you merely want to implement
> floating-point in software, then you can get it down to tens of
> instructions. For example, holding floating-point numbers as a
> structure designed for software, like:
>
> struct { unsigned long mantissa; int exponent; unsigned char sign; }
>
> is VASTLY easier than emulating IEEE. It's still thoroughly messy.
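
To make that concrete, here is a minimal sketch of what a multiply looks
like in such a software-friendly format. The field widths, the
normalization convention (top bit of the mantissa set), and the routine
name are my own assumptions, not Maclaren's; note there is no rounding
and no handling of overflow, infinities, or NaNs, which is exactly why
it stays in the "tens of instructions" range:

```c
#include <stdint.h>

/* Hypothetical software float: value = (-1)^sign * mantissa * 2^exponent,
   with the mantissa normalized so bit 31 is set (mantissa == 0 means 0.0). */
struct soft_float {
    uint32_t mantissa;   /* normalized: top bit set, unless value is zero */
    int      exponent;   /* power-of-two scale applied to the mantissa */
    uint8_t  sign;       /* 0 = positive, 1 = negative */
};

/* Multiply: 32x32->64 mantissa product, renormalize by at most one bit,
   add exponents, XOR signs.  Low product bits are simply truncated. */
static struct soft_float sf_mul(struct soft_float a, struct soft_float b)
{
    struct soft_float r;
    r.sign = a.sign ^ b.sign;
    if (a.mantissa == 0 || b.mantissa == 0) {
        r.mantissa = 0;
        r.exponent = 0;
        r.sign = 0;
        return r;
    }
    uint64_t p = (uint64_t)a.mantissa * b.mantissa;  /* in [2^62, 2^64) */
    r.exponent = a.exponent + b.exponent + 32;
    if (!(p >> 63)) {                 /* product slipped below 2^63: */
        p <<= 1;                      /* shift back into normal form  */
        r.exponent--;
    }
    r.mantissa = (uint32_t)(p >> 32); /* keep the top 32 bits */
    return r;
}
```

For example, 3.0 is {0xC0000000, -30, 0} and 2.0 is {0x80000000, -30, 0};
sf_mul of the two yields {0xC0000000, -29, 0}, i.e. 6.0. Compare this with
IEEE emulation, where the same multiply must also unpack a biased exponent,
handle the hidden bit, round, and check for every special case.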

And speaking of emulating IEEE 754 float operations, speed and
code size go south in a big hurry if infinities, denormalized
numbers, NaNs, and rounding are handled properly. Add some
more adverse impact if double-precision float is implemented
instead of, or in addition to, the usual single-precision float.

Regardless, MFLOPS will be measured in fractions, and quite
small fractions at that. Any relation between MIPS and MFLOPS
will be purely coincidental.
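
That said, for the original poster's sizing question a crude estimate is
still possible: if one software floating-point operation averages N
instructions, then MFLOPS is roughly MIPS divided by N. The helper below
just encodes that arithmetic; the instruction counts you feed it are
assumptions you must measure on your actual emulation code, not constants:

```c
/* Back-of-envelope conversion between MIPS and software-float MFLOPS.
   insns_per_flop is an ASSUMED average cost of one emulated operation
   (tens for a software-friendly format, hundreds for full IEEE). */
static double est_mflops(double cpu_mips, double insns_per_flop)
{
    return cpu_mips / insns_per_flop;
}

/* Inverse: MIPS budget needed to sustain a target software MFLOPS rate. */
static double mips_needed(double target_mflops, double insns_per_flop)
{
    return target_mflops * insns_per_flop;
}
```

For instance, a 20-MIPS CPU at an assumed 40 instructions per operation
gives about 0.5 MFLOPS, while hitting 0.5 MFLOPS with a 200-instruction
IEEE emulation would need on the order of 100 MIPS. Either way the
estimate only bounds the floating-point work; the application's integer
and memory traffic still competes for the same instruction budget.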