Does anyone know of any machines, that have a floating-point compare

instruction, where that instruction is not exact? That is, some of

the value bits of the floating-point numbers are ignored. Or, if

the compare is done internally as a subtract, the subtract results in

an underflow (so two different numbers near the minimum normalized

value, when subtracted produce a subnormal number) that is then

flushed to zero, so compare equal.

---

Fred J. Tydeman Tydeman Consulting

+1 (775) 287-5904 Vice-chair of J11 (ANSI "C")

Sample C99+FPCE tests: ftp://jump.net/pub/tybor/

Savers sleep well, investors eat well, spenders work forever.

> Does anyone know of any machines, that have a floating-point compare

> instruction, where that instruction is not exact? That is, some of

> the value bits of the floating-point numbers are ignored. Or, if

> the compare is done internally as a subtract, the subtract results in

> an underflow (so two different numbers near the minimum normalized

> value, when subtracted produce a subnormal number) that is then

> flushed to zero, so compare equal.

The easiest method would seem to be a subtract, scale, add and compare:

double inexact_compare(double a, double b, int bits_to_skip)
{
    // First take the regular difference:
    double diff = b - a;

    // Shift this difference down by the number of bits to ignore:
    double scaled_diff = diff / ((double) (1LL << bits_to_skip));

    // Add it back to a; this will be a no-op if
    // the difference is small enough
    double a_delta = a + scaled_diff;

    // Are the resulting values equal?
    if (a == a_delta) return 0.0;

    // Otherwise, return the actual difference:
    return diff;
}

For a fast implementation, the division and shift operation should of
course be replaced with either a table lookup, or by sending the
required scale factor as the third parameter:

double inexact_compare(double a, double b, double scale_factor)

...

scaled_diff = diff * scale_factor;

Terje

--

"almost all programming can be viewed as an exercise in caching"

> Does anyone know of any machines, that have a floating-point compare

> instruction, where that instruction is not exact? That is, some of

> the value bits of the floating-point numbers are ignored. Or, if

> the compare is done internally as a subtract, the subtract results in

> an underflow (so two different numbers near the minimum normalized

> value, when subtracted produce a subnormal number) that is then

> flushed to zero, so compare equal.

Yup. But modern machines (IEEE) do not. The CDC * was one,

and in two ways. The machine had four compare instructions for the

registers that could hold a floating point number, "zero", "non-zero",

"positive, including +0", "negative, including -0". To compare you

subtract, normalise and compare with 0. The normalisation would

indeed flush subnormal numbers to 0. But there was another problem.

The result of the subtraction had an exponent equal to the larger
of the two exponents of the operands. This could lead to the loss

of the least significant bit of the subtraction result when the

exponents of the operands differed by 1. And that bit *could* be

the only significant bit of the result. E.g. 2^48 (^ means to the

power here) and (2^48 - 1) would compare equal (48-bit mantissa).

For (in)equality you could always do a bitwise exclusive or of the two

operands, but that is not possible with other comparisons.

--

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131

home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/

:

: The easiest method would seem to be a subtract, scale, add and compare:

:

: double inexact_compare(double a, double b, int bits_to_skip)

: {

: // First take the regular difference:

: double diff = b - a;

:

: // Shift this difference down by the number of bits to ignore:

: double scaled_diff = diff / ((double) (1LL << bits_to_skip));

:

This does not work for floating point..... or am I missing something

obvious?

Regards,

--buddy

--

Remove '.spaminator' and '.invalid' from email address

when replying.

> > Does anyone know of any machines, that have a floating-point compare

> > instruction, where that instruction is not exact? That is, some of

> > the value bits of the floating-point numbers are ignored. Or, if

> > the compare is done internally as a subtract, the subtract results in

> > an underflow (so two different numbers near the minimum normalized

> > value, when subtracted produce a subnormal number) that is then

> > flushed to zero, so compare equal.

> The easiest method would seem to be a subtract, scale, add and compare:

My hypothesis is that all machines are exact when they do floating-point
compares.
I am looking for a counter-example, i.e., a machine that is inexact when
it does compares (==, !=, <=, <, >, >=).

> double inexact_compare(double a, double b, int bits_to_skip)
> {
> // First take the regular difference:
> double diff = b - a;

That subtract can underflow (with the subnormal difference flushed to zero)
and it can overflow (which might be OK if an appropriate signed infinity
were produced; you know the numbers are very different).

---

Fred J. Tydeman Tydeman Consulting

+1 (775) 287-5904 Vice-chair of J11 (ANSI "C")

Sample C99+FPCE tests: ftp://jump.net/pub/tybor/

Savers sleep well, investors eat well, spenders work forever.

> Does anyone know of any machines, that have a floating-point compare

> instruction, where that instruction is not exact?

For example, the Rexx language has a 'NUMERIC FUZZ n' setting, which reduces

the working precision by n digits for compare operations. (This option was

a refinement on an idea from some earlier languages -- I forget which.)

Details are in ANSI Standard X3.274-1996 or any good Rexx book.

Mike Cowlishaw

> > Does anyone know of any machines, that have a floating-point compare

> > instruction, where that instruction is not exact? That is, some of

> > the value bits of the floating-point numbers are ignored. Or, if

> > the compare is done internally as a subtract, the subtract results in

> > an underflow (so two different numbers near the minimum normalized

> > value, when subtracted produce a subnormal number) that is then

> > flushed to zero, so compare equal.

>Yup. But modern machines (IEEE) do not. ...

A fair proportion of 'IEEE' machines, aren't. A very common feature
is to support denormalised numbers optionally, in software, or not at
all. In all cases, this will usually lead to the comparison of two
different denormalised numbers treating them as equal.

I don't know of any current ones that are inconsistent, so you can
create such numbers only by creating them as bit patterns
(e.g. reading a binary file), but I haven't checked. Incidentally,
one of CDF's bugs is that it does not handle this case correctly,
or at least it used not to. There MAY be some systems that are
inconsistent in this respect.

[ In a consistent system, two numbers that compare equal will have

an equivalent effect when used in any calculation. ]

Regards,

Nick Maclaren.

...

> > > the compare is done internally as a subtract, the subtract results in

> > > an underflow (so two different numbers near the minimum normalized

> > > value, when subtracted produce a subnormal number) that is then

> > > flushed to zero, so compare equal.

> >

> >Yup. But modern machines (IEEE) do not. ...

>

> A fair proportion of 'IEEE' machines, aren't. A very common feature

> is to support denormalised numbers optionally, in software, or not at

> all. In all cases, this will usually lead to the comparison of two

> different denormalised numbers treating them as equal.

Yup, indeed, I can imagine that the processor treats denormal input

for operations as zero, and traps when the flush to zero flag is not

set. But such numbers can not be the result of an operation in this

case.

--

dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131

home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/

> > A fair proportion of 'IEEE' machines, aren't. A very common feature
> > is to support denormalised numbers optionally, in software, or not at
> > all. In all cases, this will usually lead to the comparison of two
> > different denormalised numbers treating them as equal.

>Yup, indeed, I can imagine that the processor treats denormal input
>for operations as zero, and traps when the flush to zero flag is not
>set. But such numbers can not be the result of an operation in this
>case.

Don't be so sure. When I last checked around (nearly a decade back,
now), I found a fair number of systems that handled denormalised

operands and results inconsistently. At least one made the classic

mistake of prenormalising before multiplication and division, while

supporting denormalised numbers in other operations.

If I recall correctly, at least one other DID create denormalised

operands during formatted reading, but thereafter did not handle

them. And at least one did the same somewhere in the mathematical

library.

The reason for this whole mess is that almost no modern standard

(IEEE 754 not excepted) has a clear description of its scope,

mathematical model and objectives. IEEE 754 is better than some,

but more deceptive than most in that its inconsistencies are subtle.

Regards,

Nick Maclaren.

> :

> : The easiest method would seem to be a subtract, scale, add and compare:

> :

> : double inexact_compare(double a, double b, int bits_to_skip)

> : {

> : // First take the regular difference:

> : double diff = b - a;

> :

> : // Shift this difference down by the number of bits to ignore:

> : double scaled_diff = diff / ((double) (1LL << bits_to_skip));

> :

> This does not work for floating point..... or am I missing something

> obvious?

Well, it does work (modulo having a shift which is smaller than the
bit size of an integer), but the casting to double of the integer shift
result is pretty non-optimal, as is the fp division that follows.

As I wrote in my post, this was only for illustration, a useful

implementation would either use a fp scale operation or a lookup table

to generate a suitable shifting multiplier.

Terje

--

"almost all programming can be viewed as an exercise in caching"

:> : // Shift this difference down by the number of bits to ignore:

:> : double scaled_diff = diff / ((double) (1LL << bits_to_skip));

:> :

:>

(I said: )

:> This does not work for floating point..... or am I missing something

:> obvious?

:

: Well, it does work (modulo having a shift which is smaller than the

: bit size of an integer), but the casting to double of the integer shift

Yeah, I for some reason thought you were shifting the floating point

number...must not've been reading very well :)

my apologies

Regards,

--buddy
