> [...] I'm porting a unix application from
> Solaris to Linux. Everything works fine (the application is a superset
> of prolog) but for one thing. Every single time I try to do floating
> point operation it crashes with an exception, the operation being
> "denormalized". This property of floating point numbers does not appear
> on Solaris (SPARC architecture). I can find the flags to mask this
> ieee exception, but when I do so, the program does not work anymore
> (looks like comparisons of floating point fail miserably, returning wrong
> results). Anyway, the application is ported to Win NT and runs
> perfectly, so it is not a hardware specific problem. So my questions:
> 1) What does it mean to be denormalized ?
> 2) How can I fix a "denormalized floating point" ?
> 3) What mask am I supposed to use ?
> 4) Are there special compilation flags that I should pass to gcc to get
> a correct behavior ?
> 5) What is going on ?
Normalization means representing numbers as 0.234*10^6 rather than
0.000234*10^9 or 0.0234*10^7. In hardware binary, this corresponds to
making sure the
leftmost bit of the mantissa is always 1, which can then be assumed,
giving an extra bit of accuracy. However, operations on normalized
numbers can give a denormalized result and continuing calculations
with a denormalized operand without normalizing it will get you in
trouble. The FPU normally takes care of that, although there is a special
situation at the low end of the representable range (running out of
bits for the exponent).
My first guess would be that you have a stray pointer or something,
actually overwriting the values you need to compute on.
Try isolating the calculations that go wrong, put in some test
printouts and show us exactly what blows up. Also include
O__ ---- Peter Dalgaard Blegdamsvej 3
c/ /'_ --- Dept. of Biostatistics 2200 Cph. N
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918