> Hi all,
> Please take a look at the following 3-line program.
> main(){
> float t=0.001;
> float u=1/t;
> printf("%f\n",u); // u expected to be 1000
> }
> The actual output is 999.999939.
> On debugging using gdb, we find that t is actually assigned as
> 0.00100000005. This seems to be the cause of the discrepancy.
> What is the reason for this garbage at the tail of the fp number?
> I am writing a program which depends on the precise values of fp
> numbers and this problem is f****ing it all up!
> How do I get around it?
Use double for both variables... a double uses 64 bits for them (not 32
like float), so the rounding error gets far smaller. Note that 0.001
still isn't stored exactly even then, just much closer.
This happens because a 32-bit float uses 1 bit for the sign of the
number, 8 bits for the exponent (stored with a bias of 127, so there is
no separate sign bit for it), and 23 bits for the fraction (effectively
24, since the leading 1 isn't stored).
0.001 (dec) = 0.0000000001000001100010010011011101001... in binary — an
infinitely long fraction, so 0.001 cannot be stored exactly in binary at
all.
The computer stores the normalized form
1.00000110001001001101110100... x 2^(-10), rounded to 24 significant
bits.
So, when you calculate something with it, you're really calculating with
0.00100000004749745..., which is why 1/t comes out as 999.999939 instead
of 1000.
Solutions... use a wider type (double or long double), compare results
against a tolerance instead of exactly, or rearrange the math so the
tiny error doesn't matter.
Hope this makes sense :)
UNiDoG