Help with some basic questions about Log[] and x^y

Post by Terry » Tue, 09 Dec 2003 14:17:15



Hi,

I'm a programmer who finds himself in need of some guidance in
navigating the treacherous world of error analysis. Basically I have a
calculation that involves powering, additions, multiplications and
logs that I need to do an error analysis for. The additions and
multiplications are easy enough, but I'm at a loss when it comes to
the powering and logarithms.

So far my research has led me to the following understanding:

- IEEE 754 is the predominant standard that governs the accuracy of
arithmetic operations in modern machines
- The 754 standard stipulates that basic arithmetic operations like
additions and multiplications must be exactly rounded but it says
nothing about transcendentals
- For transcendentals and powering there seem to be many algorithms

So here are some of the questions I'm struggling with:
- Given that I'm currently only using WIntel hardware, is there a
standard transcendental and powering algorithm in use?
- Is there a paper that describes these algorithms along with an error
analysis?
- If not in the above paper, where might I find out what the absolute
worst error is for a Log[x] and x^y operation where both x and y are
floating point numbers?

Finally, is there a good introductory text on error analysis
specifically focusing on powering and logs suitable for a math
hobbyist such as myself? Or is that a stupid question?

Any and all help will be very much appreciated!

terry
(terryisnow at yahoo.com)

 
 
 

Help with some basic questions about Log[] and x^y

Post by Terje Mathisen » Tue, 09 Dec 2003 18:47:36



> - IEEE 754 is the predominant standard that governs the accuracy of
> arithmetic operations in modern machines

Right.

> - The 754 standard stipulates that basic arithmetic operations like
> additions and multiplications must be exactly rounded but it says
> nothing about transcendentals
> - For transcendentals and powering there seem to be many algorithms

Right.


> So here are some of the questions I'm struggling with:
> - Given that I'm currently only using WIntel hardware, is there a
> standard transcendental and powering algorithm in use?

x86 makes this quite easy, since these chips use 80-bit floats internally,
including for all transcendental calculations.

In particular, for in-range inputs, all transcendental functions on
Pentium (P5 and later) Intel CPUs will deliver results that are within
1.0 ulp in 80-bit format.

This practically/normally means that as long as you can do your
calculations in this format, the results will be within 0.5 ulp after
rounding to double precision.
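
For instance, something along these lines (a quick untested sketch; it
assumes your compiler's long double really is the 80-bit x87 format, which
is true for e.g. gcc on x86 but not for every compiler, and the function
name is just for illustration):

#include <math.h>
#include <stdio.h>

/* Sketch: do the intermediate work in 80-bit extended precision,
   then round once to double at the end. */
double pow_via_extended(double x, double y)
{
    long double t = logl((long double)x);      /* log in extended precision */
    long double r = expl((long double)y * t);  /* x^y = exp(y*log(x)) */
    return (double)r;                          /* single final rounding */
}

int main(void)
{
    printf("%.17g\n", pow_via_extended(2.0, 10.0)); /* 1024, or extremely close */
    return 0;
}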

> - Is there a paper that describes these algorithms along with an error
> analysis?

There's an Intel paper (pdf) written around the time the Pentium was
released, most probably by Peter Tang (who designed the algorithms).

Terje

--

"almost all programming can be viewed as an exercise in caching"

 
 
 

Help with some basic questions about Log[] and x^y

Post by glen herrmannsfeldt » Wed, 10 Dec 2003 05:40:08



> I'm a programmer who finds himself in need of some guidance in
> navigating the treacherous world of error analysis. Basically I have a
> calculation that involves powering, additions, multiplications and
> logs that I need to do an error analysis for. The additions and
> multiplications are easy enough, but I'm at a loss when it comes to
> the powering and logarithms.

(snip)

> Finally, is there a good introductory text on error analysis
> specifically focusing on powering and logs suitable for a math
> hobbyist such as myself? Or is that a stupid question?

The books I know have titles like "Statistical Treatment of Experimental
Data."   Mostly the idea is that all measured quantities have an
uncertainty to them, and you need to take that into account when working
with such data.

Now, you ask a different question, related to the effect of computer
arithmetic on such data.   The first answer is that your computer
arithmetic should be accurate enough such that experimental error (in
the input data) is larger than the computational error.

Consider some basic operations on data with uncertainties:

(a +/- da) + (b +/- db) = (a + b) +/- sqrt( da**2 + db**2 )

assuming that da and db are statistically independent, in other words,
uncorrelated, and it also assumes they are Gaussian distributed.

(a +/- da) * (b +/- db) = (a*b) +/- sqrt( (a*db)**2 + (b*da)**2)

For any function f(x),  f(x +/- dx) = f(x) +/- dx * f'(x)

For functions of more than one variable, the final uncertainty is the
sqrt() of the sum of the squares of the individual uncertainties.

sin(x +/- dx) = sin(x) +/- dx * cos(x)

log(x +/- dx) = log(x) +/- dx/x

exp(x +/- dx) = exp(x) +/- dx * exp(x)

x**y=exp(y*log(x))

(x +/- dx) ** (y) = exp(y*log(x +/- dx)) = exp(y*log(x) +/- y*dx/x)
                   = x**y * exp( +/- y*dx/x) = x**y +/- x**y*y*dx/x

x ** (y +/- dy) = exp((y +/- dy) * log(x)) = x**y +/- dy*log(x)*x**y

(x +/- dx) ** (y +/- dy) = x**y
             +/- x**y*sqrt((y*dx/x)**2+(dy*log(x))**2)

These were all typed in on the fly, so I could have made mistakes in
them.  You can check them yourself to be sure they are right.
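
If you want to play with these, here is a quick C version of the last
formula, also typed straight in and untested (the function name and the
example numbers are just for illustration):

#include <math.h>
#include <stdio.h>

/* First-order propagation for z = x^y:
   dz = x^y * sqrt( (y*dx/x)**2 + (dy*log(x))**2 )
   Assumes x > 0 and that dx, dy are small and independent. */
void pow_with_error(double x, double dx, double y, double dy,
                    double *z, double *dz)
{
    *z  = pow(x, y);
    *dz = *z * sqrt(pow(y * dx / x, 2) + pow(dy * log(x), 2));
}

int main(void)
{
    double z, dz;
    pow_with_error(2.0, 0.01, 10.0, 0.0, &z, &dz);
    printf("%g +/- %g\n", z, dz);   /* 1024 +/- 51.2 */
    return 0;
}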

-- glen

 
 
 

Help with some basic questions about Log[] and x^y

Post by Nick Maclaren » Wed, 10 Dec 2003 06:13:29





>> I'm a programmer who finds himself in need of some guidance in
>> navigating the treacherous world of error analysis. Basically I have a
>> calculation that involves powering, additions, multiplications and
>> logs that I need to do an error analysis for. The additions and
>> multiplications are easy enough, but I'm at a loss when it comes to
>> the powering and logarithms.

>> Finally, is there a good introductory text on error analysis
>> specifically focusing on powering and logs suitable for a math
>> hobbyist such as myself? Or is that a stupid question?

>The books I know have titles like "Statistical Treatment of Experimental
>Data."   Mostly the idea is that all measured quantities have an
>uncertainty to them, and you need to take that into account when working
>with such data.

>Now, you ask a different question, related to the effect of computer
>arithmetic on such data.   The first answer is that your computer
>arithmetic should be accurate enough such that experimental error (in
>the input data) is larger than the computational error.

I think that he is asking about a different sort of error analysis.
I.e. the sort that is done (for linear algebra) in Wilkinson and
Reinsch.  He needs to find a good numerical analysis textbook, of
the type that attempts to explain the mathematics.  Ask on
sci.math.num-analysis.

>Consider some basic operations on data with uncertainties:

>(a +/- da) + (b +/- db) = (a + b) +/- sqrt( da**2 + db**2 )

>assuming that da and db are statistically independent, in other words,
>uncorrelated, and it also assumes they are Gaussian distributed.

Actually, no, it doesn't.  That formula applies to ANY independent
distributions with variances - and the distributions don't even have
to be the same.  Not all distributions have variances, of course.

>(a +/- da) * (b +/- db) = (a*b) +/- sqrt( (a*db)**2 + (b*da)**2)

>For any function f(x),  f(x +/- dx) = f(x) +/- dx * f'(x)

> . . .

Approximately, and only when da and db are small relative to |a| and
|b|.

Regards,
Nick Maclaren.

 
 
 

Help with some basic questions about Log[] and x^y

Post by Dik T. Winter » Wed, 10 Dec 2003 09:45:24




 > >> Finally, is there a good introductory text on error analysis
 > >> specifically focusing on powering and logs suitable for a math
 > >> hobbyist such as myself? Or is that a stupid question?
 > >
 > >The books I know have titles like "Statistical Treatment of Experimental
 > >Data."   Mostly the idea is that all measured quantities have an
 > >uncertainty to them, and you need to take that into account when working
 > >with such data.
 > >
 > >Now, you ask a different question, related to the effect of computer
 > >arithmetic on such data.   The first answer is that your computer
 > >arithmetic should be accurate enough such that experimental error (in
 > >the input data) is larger than the computational error.
 >
 > I think that he is asking about a different sort of error analysis.
 > I.e. the sort that is done (for linear algebra) in Wilkinson and
 > Reinsch.  He needs to find a good numerical analysis textbook, of
 > the type that attempts to explain the mathematics.  Ask on
 > sci.math.num-analysis.

Indeed, statistical error analysis leads to overestimates of the error
to the point where it makes no sense to do the calculation at all.
With such an analysis, solutions of linear systems with 40 unknowns
or more are too unreliable.  Also numerical mathematics in general
uses "black-box" analysis, i.e. it is assumed that the input is exact
and the results are based on that assumption.  And finally, statistical
error analysis tells you nothing about the additional errors you get
due to the finite precision operations you are performing.  To re-quote:
 > >                           The first answer is that your computer
 > >arithmetic should be accurate enough such that experimental error (in
 > >the input data) is larger than the computational error.
Numerical error analysis will tell you whether that is the case or not.
You can *not* state: my experimental error is in the 5th digit, so when
I perform the arithmetic in 15 digits, that will be sufficient.  There
are cases where it is *not* sufficient.

One final point, in numerical error analysis you can go two ways.
Forward and backward.  To illustrate, suppose you have a set of
input data I and the algorithm results in a set of output data O.
In forward analysis you assume exact I and find that the computed
O is within some bound of the exact O (with some way to measure).  With
backward analysis you show that the computed O is the exact answer
for some I within some bound of the exact I.  It is this latter form
that made numerical algebra on large systems possible.
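
A tiny illustration of the two views, using a single rounded addition
(untested; the numbers are only chosen to make the rounding visible):

#include <stdio.h>
#include <float.h>

int main(void)
{
    double a = 1.0;
    double b = DBL_EPSILON / 4;   /* too small to survive the rounding */
    double s = a + b;             /* rounds to exactly 1.0 */

    printf("computed s    = %.17g\n", s);
    printf("forward error = %.3g\n", b);   /* |s - (a+b)| = b here */
    /* Backward view: the computed s is the *exact* sum of a and a
       perturbed input b' = 0; the perturbation of the data is far
       below one ulp of the larger operand. */
    return 0;
}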
--
dik t. winter, cwi, kruislaan 413, 1098 sj  amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn  amsterdam, nederland; http://www.cwi.nl/~dik/

 
 
 

Help with some basic questions about Log[] and x^y

Post by Terry » Wed, 10 Dec 2003 12:14:56


Wow, thanks for the replies so far!

Actually Terje had it right, I'm only looking for error analysis in
the context of machine arithmetic. In other words I'm only interested
in the errors that are introduced by my computer, all input numbers
are assumed to be perfect once generated within the computer (you
might have guessed this is a simulation of sorts and all data are
contrived to meet specific criteria).

So far I've gone through "What Every Scientist Should Know About
Floating-Point Arithmetic" (http://docs.sun.com/db/doc/800-7895) and
I'm going through Peter Tang's "Table-driven implementation of the
logarithm function in IEEE floating-point arithmetic" (albeit slowly),
but nowhere am I able to find the absolute worst errors that can
result from x^y calculations. Peter's article does specify an error
for the Log function, but I don't know where his algorithm is
implemented. Ideally, if I could just find a survey paper that talks
about which algorithm is implemented in which hardware, that would go
a long way towards answering my questions.

All the other 'how to implement powering function' papers I'm able to
find don't seem to give an error analysis (maybe I'm not looking hard
enough?)

Peter does mention in his "Table-driven implementation of the Expm1
function in IEEE floating-point arithmetic" article that he's planning
on writing another paper on the POW() function but I have yet to find
such a paper.

As always any and all responses are much appreciated!

terry

P.S. I did find a paper also by Peter talking about how arithmetic is
implemented in the Itanium 64bit CPU, but the hardware I'm using is
all pre-P5 :(

 
 
 

Help with some basic questions about Log[] and x^y

Post by Nick Maclaren » Wed, 10 Dec 2003 17:20:53




|>
|> Indeed, statistical error analysis leads to overestimates of the error
|> to the point where it makes no sense to do the calculation at all.

Grrk.  That is an oversimplification.  It can do the converse.
Consider the powering example!

|> With such an analysis, solutions of linear systems with 40 unknowns
|> or more are too unreliable.  Also numerical mathematics in general
|> uses "black-box" analysis, i.e. it is assumed that the input is exact
|> and the results are based on that assumption.

That is shoddy - get better books!  Wilkinson and Reinsch doesn't.

|> And finally, statistical
|> error analysis tells you nothing about the additional errors you get
|> due to the finite precision operations you are performing.

It depends on how you use it.  I have used it for that.

Regards,
Nick Maclaren.

 
 
 

Help with some basic questions about Log[] and x^y

Post by Nick Maclaren » Wed, 10 Dec 2003 17:26:46



|>
|> All the other 'how to implement powering function' papers I'm able to
|> find don't seem to give an error analysis (maybe I'm not looking hard
|> enough?)

Your problem is that such analyses date from the days when we
"Just Did It" - i.e. before computer scientists got involved
(or even existed).  So there are few coherent descriptions, as
such examples were regarded as not worth writing up.

The analysis might be tricky, but the principles are trivial.
Logically, what you do is to work out the most each calculation
could increase the error on its input, and work through.  For
most forms of powering, it is straightforward.
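
As a crude back-of-the-envelope sketch of working it through for x^y
computed as exp(y*log(x)), treating one ulp as roughly DBL_EPSILON of
relative error (which is only right to within a factor of two):

#include <math.h>
#include <float.h>
#include <stdio.h>

/* Rough bound, in ulps, for pow(x,y) done as exp(y*log(x)), assuming
   log() and exp() are each good to ulps_per_call ulps and the multiply
   is exactly rounded. */
double pow_ulp_bound(double x, double y, double ulps_per_call)
{
    double t   = y * log(x);                    /* the intermediate y*log(x) */
    double rel = ulps_per_call * DBL_EPSILON;   /* ~relative error per call */
    /* A relative error rel in log(x), plus 0.5*eps from the multiply,
       gives an absolute error of about |t|*(rel + 0.5*eps) in t, and
       exp() turns that absolute error into a relative error of the same
       size in its result, on top of exp()'s own error. */
    double rel_result = fabs(t) * (rel + 0.5 * DBL_EPSILON) + rel;
    return rel_result / DBL_EPSILON;            /* express the bound in ulps */
}

int main(void)
{
    /* 2^1000 with 1-ulp log() and exp(): the bound is already ~1000 ulps */
    printf("%.0f ulps\n", pow_ulp_bound(2.0, 1000.0, 1.0));
    return 0;
}

The |y*log(x)| factor is what lets a pow() built this way drift to
thousands of ulps even when log() and exp() themselves are nearly perfect.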

Regards,
Nick Maclaren.

 
 
 

Help with some basic questions about Log[] and x^y

Post by glen herrmannsfeldt » Wed, 10 Dec 2003 18:56:25


(snip)

>>>Finally, is there a good introductory text on error analysis
>>>specifically focusing on powering and logs suitable for a math
>>>hobbyist such as myself? Or is that a stupid question?
>>The books I know have titles like "Statistical Treatment of Experimental
>>Data."   Mostly the idea is that all measured quantities have an
>>uncertainty to them, and you need to take that into account when working
>>with such data.
>>Now, you ask a different question, related to the effect of computer
>>arithmetic on such data.   The first answer is that your computer
>>arithmetic should be accurate enough such that experimental error (in
>>the input data) is larger than the computational error.
> I think that he is asking about a different sort of error analysis.
> I.e. the sort that is done (for linear algebra) in Wilkinson and
> Reinsch.  He needs to find a good numerical analysis textbook, of
> the type that attempts to explain the mathematics.  Ask on
> sci.math.num-analysis.

Well, it isn't so different.  In a computation with multiple steps,
the effect of the error generated on previous steps needs to be
taken into account.

>>Consider some basic operations on data with uncertainties:
>>(a +/- da) + (b +/- db) = (a + b) +/- sqrt( da**2 + db**2 )
>>assuming that da and db are statistically independent, in other words,
>>uncorrelated, and it also assumes they are Gaussian distributed.
> Actually, no, it doesn't.  That formula applies to ANY independent
> distributions with variances - and the distributions don't even have
> to be the same.  Not all distributions have variances, of course.

Well, independence is the important part.  I had thought that
there were some distributions that it didn't apply to, but yes.

>>(a +/- da) * (b +/- db) = (a*b) +/- sqrt( (a*db)**2 + (b*da)**2)
>>For any function f(x),  f(x +/- dx) = f(x) +/- dx * f'(x)
>>. . .
> Approximately, and only when da and db are small relative to |a| and
> |b|.

When you get to the point where that isn't true you are in bad shape,
though with some algorithms it isn't hard to get there.

-- glen

 
 
 

Help with some basic questions about Log[] and x^y

Post by Nick Maclaren » Wed, 10 Dec 2003 19:12:50



|>
|> > I think that he is asking about a different sort of error analysis.
|> > I.e. the sort that is done (for linear algebra) in Wilkinson and
|> > Reinsch.  He needs to find a good numerical analysis textbook, of
|> > the type that attempts to explain the mathematics.  Ask on
|> > sci.math.num-analysis.
|>
|> Well, it isn't so different.  In a computation with multiple steps,
|> the effect of the error generated on previous steps needs to be
|> taken into account.

It is fairly different for things like powering, where the errors
are definitely NOT independent.  Statistics doesn't gain a lot with
that one.
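
The simplest illustration is squaring: in x*x both factors carry the same
error, so the relative errors add linearly (2e) rather than in quadrature
(sqrt(2)*e) as an independence assumption would predict. A quick untested
check:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.0, e = 1e-8;        /* pretend x carries a relative error e */
    double xe = x * (1.0 + e);       /* the perturbed value */
    double rel = (xe * xe - x * x) / (x * x);
    printf("actual: %.3g   independence model: %.3g\n", rel, sqrt(2.0) * e);
    return 0;
}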

|> >>Consider some basic operations on data with uncertainties:
|>
|> >>(a +/- da) + (b +/- db) = (a + b) +/- sqrt( da**2 + db**2 )
|>
|> >>assuming that da and db are statistically independent, in other words,
|> >>uncorrelated, and it also assumes they are Gaussian distributed.
|>
|> > Actually, no, it doesn't.  That formula applies to ANY independent
|> > distributions with variances - and the distributions don't even have
|> > to be the same.  Not all distributions have variances, of course.
|>
|> Well, the independent is the important part.  I had thought that
|> there were some distributions that it didn't apply to, but yes.

There are, but they are the ones without variances.  Such as Cauchy
(a.k.a. Student's t with one degree of freedom).

|> >>(a +/- da) * (b +/- db) = (a*b) +/- sqrt( (a*db)**2 + (b*da)**2)
|>
|> >>For any function f(x),  f(x +/- dx) = f(x) +/- dx * f'(x)
|>
|> >>. . .
|>
|> > Approximately, and only when da and db are small relative to |a| and
|> > |b|.
|>
|> When you get to the point where that isn't true you are in bad shape.
|> Though with some algorithms it isn't hard to do.

You can get there even with simple multiplication where the error
is comparable to the mean.  That is a notoriously hairy area.

Regards,
Nick Maclaren.

 
 
 

Help with some basic questions about Log[] and x^y

Post by Dik T. Winter » Wed, 10 Dec 2003 22:45:23


 >


 > |>
 > |> Indeed, statistical error analysis leads to overestimates of the error
 > |> to the point where it makes no sense to do the calculation at all.
 >
 > Grrk.  That is an oversimplification.  It can do the converse.
 > Consider the powering example!

Yes, indeed, it is an oversimplification.  "leads" should have been
"can lead".

 >
 > |> With such an analysis, solutions of linear systems with 40 unknowns
 > |> or more are too unreliable.  Also numerical mathematics in general
 > |> uses "black-box" analysis, i.e. it is assumed that the input is exact
 > |> and the results are based on that assumption.
 >
 > That is shoddy - get better books!  Wilkinson and Reinsch doesn't.

Wilkinson's The Algebraic Eigenvalue Problem good enough?  Even
backwards error analysis is in terms of exact input.

 >
 > |> And finally, statistical
 > |> error analysis tells you nothing about the additional errors you get
 > |> due to the finite precision operations you are performing.
 >
 > It depends on how you use it.  I have used it for that.

Of course, you *can* do it, but you have to know pretty well what you
are doing.  You can also subtract nearly equal values (that have errors
in them) when you know exactly what you are doing.  (I once did it in
a routine for the calculation of the arcsin.  The resulting high relative
error turned out to be small in the final result, due to other things.)
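
The classic small example of that kind of cancellation (not my arcsin case,
just the usual textbook one) is 1 - cos(x) for small x, which loses
essentially all its digits, while the algebraically identical 2*sin(x/2)^2
does not:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1e-8;
    double s = sin(x / 2);
    printf("1 - cos(x)     = %.17g\n", 1.0 - cos(x));  /* cancels to 0 */
    printf("2 * sin(x/2)^2 = %.17g\n", 2.0 * s * s);   /* ~5e-17, correct */
    return 0;
}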
--
dik t. winter, cwi, kruislaan 413, 1098 sj  amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn  amsterdam, nederland; http://www.cwi.nl/~dik/

 
 
 

Help with some basic questions about Log[] and x^y

Post by Fred J. Tydeman » Thu, 11 Dec 2003 02:29:17



> So here are some of the questions I'm struggling with:
> - Given that I'm currently only using WIntel hardware, is there a
> standard transcendental and powering algorithm in use?

Based upon my testing of the math libraries shipped with C compilers,
NO.

> - Is there a paper that describes these algorithms along with an error
> analysis?
> - If not in the above paper, where might I find out what the absolute
> worst error is for a Log[x] and x^y operation where both x and y are
> floating point numbers?

Please look at the program tsin.c on the public FTP site mentioned
below in my signature.  In the comments section at the top is a
table of errors on sin(355.0) for many compilers.  That should give
you an idea on which compiler and library vendors care about good
(nearly correct) results, and which don't know (or don't care) how
bad their math libraries are.  Many Intel x87 based libraries use
the fsin hardware instruction, which for the machines I have results
for, gets 7.09 bits wrong, or 135 ULP error.

I just did a scan of my ULP errors of log() and pow().
For log(), most are 1.0 or better.  But the worst is 6.1e12 ULPs.
For pow(), some may be 1.0 ULP (after excluding gross error cases),
but most are in the thousands of ULPs, with the worst being 4.2e18 ULPs.
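
If you want to make this kind of measurement yourself, a rough ulp-distance
helper looks something like this (a sketch only, not the code from tsin.c;
it ignores corner cases at zero, infinities and exponent boundaries, and it
assumes you already have a trustworthy higher-precision reference value):

#include <math.h>
#include <stdio.h>

/* Distance, in ulps of the reference, between a computed double and a
   reference value. */
double ulp_diff(double computed, double reference)
{
    double one_ulp = nextafter(reference, INFINITY) - reference;
    return fabs(computed - reference) / one_ulp;
}

int main(void)
{
    /* 0.1 + 0.2 lands one ulp above the double nearest to 0.3 */
    printf("%.2f ulps\n", ulp_diff(0.1 + 0.2, 0.3));
    return 0;
}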

---
Fred J. Tydeman        Tydeman Consulting

+1 (775) 287-5904      Vice-chair of J11 (ANSI "C")
Sample C99+FPCE tests: ftp://jump.net/pub/tybor/
Savers sleep well, investors eat well, spenders work forever.

 
 
 

Help with some basic questions about Log[] and x^y

Post by Terje Mathisen » Thu, 11 Dec 2003 03:52:31




>>So here are some of the questions I'm struggling with:
>>- Given that I'm currently only using WIntel hardware, is there a
>>standard transcendental and powering algorithm in use?

> Based upon my testing of the math libraries shipped with C compilers,
> NO.

>>- Is there a paper that describes these algorithms along with an error
>>analysis?
>>- If not in the above paper, where might I find out what the absolute
>>worst error is for a Log[x] and x^y operation where both x and y are
>>floating point numbers?

> Please look at the program tsin.c on the public FTP site mentioned
> below in my signature.  In the comments section at the top is a
> table of errors on sin(355.0) for many compilers.  That should give
> you an idea on which compiler and library vendors care about good
> (nearly correct) results, and which don't know (or don't care) how
> bad their math libraries are.  Many Intel x87 based libraries use
> the fsin hardware instruction, which for the machines I have results
> for, gets 7.09 bits wrong, or 135 ULP error.

This is a bit harsh imho:

The Pentium FSIN is clearly documented (at least in Peter Tang's papers)
to be used on arguments that have been reduced to the +/- pi range.

Due to the 80-bit extended format, it will stay within 0.5 ulp for double
arguments of a somewhat larger range, but not when it is used on an exact
integer value that just happens to be _very_ close to N * pi.

Extending the x86 library to use a simple approximate range test up
front, and then a reduction if needed, would add very little to the
average runtime, i.e. something like this:

double sin(double x)
{
   double n;
   int i;

   n = round(x * one_over_pi);

   if (fabs(n) > RANGE_LIMIT) {
     /* Use an extended precision value for pi, defined via an
        array of float values. This keeps each operation exact. */
     for (i = 0; i < PI_ARRAY_LEN; i++)
        x -= n * pi_array[i];
   }
   return FSIN(x);   /* the x87 hardware instruction */
}

The main problem here is that I really don't like inputting large (i.e.
out-of-range) values to functions like sin(), and then pretending that
said values are exact. :-(

Terje

--

"almost all programming can be viewed as an exercise in caching"

 
 
 

Help with some basic questions about Log[] and x^y

Post by Fred J. Tydeman » Thu, 11 Dec 2003 13:58:55



> > Please look at the program tsin.c on the public FTP site mentioned
> > below in my signature.  In the comments section at the top is a
> > table of errors on sin(355.0) for many compilers.  That should give
> > you an idea on which compiler and library vendors care about good
> > (nearly correct) results, and which don't know (or don't care) how
> > bad their math libraries are.  Many Intel x87 based libraries use
> > the fsin hardware instruction, which for the machines I have results
> > for, gets 7.09 bits wrong, or 135 ULP error.

> This is a bit harsh imho:

> The Pentium FSIN is clearly documented (at least in Peter Tang's papers)
> to be used on arguments that have been reduced to the +/- pi range.

Here are the results I get from testing an Intel Pentium 4.
In the following, FUT means Function Under Test.

Using 80-bit long doubles gets:
 Test vector 128: FUT not close enough: +3.440707728423067690000e+17 ulp error
 Input arg=4000c90fdaa22168c234=+3.141592653589793238300e+00
 Expected =3fc0c4c6628b80dc1cd1=+1.666748583704175665640e-19
 Computed =3fc0c000000000000000=+1.626303258728256651010e-19

 Test vector 129: FUT not close enough: +1.376283091369227077000e+18 ulp error
 Input arg=4000c90fdaa22168c235=+3.141592653589793238510e+00
 Expected =bfbeece675d1fc8f8cbb=-5.016557612668332023450e-20
 Computed =bfbf8000000000000000=-5.421010862427522170040e-20

Using 64-bit doubles gets:
 Test vector 64: FUT not close enough: +1.64065729543000000e+11 ulp error
 Input arg=400921fb54442d18=+3.14159265358979312e+00
 Expected =3ca1a62633145c07=+1.22464679914735321e-16
 Computed =3ca1a60000000000=+1.22460635382237726e-16

 Test vector 65: FUT not close enough: +8.20328647710000000e+10 ulp error
 Input arg=400921fb54442d19=+3.14159265358979356e+00
 Expected =bcb72cece675d1fd=-3.21624529935327320e-16
 Computed =bcb72d0000000000=-3.21628574467824890e-16

Here is the Intel documentation on accuracy of FSIN:

IA-32 Intel® Architecture Software Developer's Manual
Volume 1: Basic Architecture, Order Number 245470-006

PROGRAMMING WITH THE X87 FPU

8.3.10. Transcendental Instruction Accuracy

New transcendental instruction algorithms were incorporated into the
IA-32 architecture beginning with the Pentium processors. These new
algorithms (used in the transcendental instructions FSIN, FCOS, FSINCOS,
FPTAN, FPATAN, F2XM1, FYL2X, and FYL2XP1) allow a higher level of
accuracy than was possible in earlier IA-32 processors and x87 math
coprocessors. The accuracy of these instructions is measured in terms
of units in the last place (ulp). For a given argument x, let f(x) and
F(x) be the correct and computed (approximate) function values,
respectively.  The error in ulps is defined to be:
 ... formula would not cut and paste from PDF file ...

With the Pentium and later IA-32 processors, the worst case error on
transcendental functions is less than 1 ulp when rounding to the
nearest (even) and less than 1.5 ulps when rounding in other
modes. The functions are guaranteed to be monotonic, with respect to
the input operands, throughout the domain supported by the
instruction.

The instructions FYL2X and FYL2XP1 are two operand instructions and
are guaranteed to be within 1 ulp only when y equals 1. When y is not
equal to 1, the maximum ulp error is always within 1.35 ulps in round
to nearest mode. (For the two operand functions, monotonicity was
proved by holding one of the operands constant.)

The only reference I can find to the domain of FSIN is:

8.1.2.2 Condition Code Flags

The FPTAN, FSIN, FCOS, and FSINCOS instructions set the C2 flag to 1
to indicate that the source operand is beyond the allowable range of
2**63 and clear the C2 flag if the source operand is within the
allowable range.

> Due to the 80-bit extended format, it will stay within 0.5 ulp for double
> arguments of a somewhat larger range, but not when it is used on an exact
> integer value that just happens to be _very_ close to N * pi.

The results I am getting do not look like 0.5 ULP for doubles near pi.

The only FPU I know (from personally testing) that gets around 0.5 ULP
accuracy for the full input domain of -2**63 to +2**63 for FSIN, is the
AMD K5, done in 1995, designed by Tom *.  I believe that it takes
around 190 bits of pi to do correct argument reduction for those values.
---
Fred J. Tydeman        Tydeman Consulting

+1 (775) 287-5904      Vice-chair of J11 (ANSI "C")
Sample C99+FPCE tests: ftp://jump.net/pub/tybor/
Savers sleep well, investors eat well, spenders work forever.

 
 
 

Help with some basic questions about Log[] and x^y

Post by Terje Mathisen » Thu, 11 Dec 2003 17:07:13




>>Due to the 80-bit extended format, it will stay within 0.5 ulp for double
>>arguments of a somewhat larger range, but not when it is used on an exact
>>integer value that just happens to be _very_ close to N * pi.

> The results I am getting do not look like 0.5 ULP for doubles near pi.

I agree, they don't. 'doubles near pi' _are_ covered by my N*pi
argument though, allowing N == 1. :-)


> The only FPU I know (from personally testing) that gets around 0.5 ULP
> accuracy for the full input domain of -2**63 to +2**63 for FSIN, is the
> AMD K5, done in 1995, designed by Tom *.  I believe that it takes
> around 190 bits of pi to do correct argument reduction for those values.

Last I checked, the Pentium uses an 83-bit (67-bit mantissa) version of pi
for argument reduction, so this also becomes the effective limit as soon
as you have to do any kind of argument reduction.

I've read about at least one implementation that has a 1024-bit pi just
to allow exact reduction for the entire +/- 10-bit exponent range of
double precision inputs. Sun?

I think we'll have to agree to disagree about the actual validity of
such calculations.

I.e. if I get an angle (by measurement or some other method) which just
happens to be within a small fraction of a degree from +/- pi, I'd
suspect that maybe the real angle should have been exactly pi.

Still, as I wrote yesterday, while it is OK for a sw lib to care about
these things, I still wouldn't saddle a hw implementation with such
requirements.

Terje
--

"almost all programming can be viewed as an exercise in caching"

 
 
 

1. "Logging" or "Log structured" file systems with news spool.

(I've included comp.arch.storage and comp.databases in hopes of getting
info from people who know about large file systems with lots of small
files).

We're putting together a new news machine since our old one, which was
a large general server is going to be retired in a few months.  Instead
of burdening our newer servers with news, we want to set up a
workstation (Sparc 5 w/Solaris 2.4) dedicated to news.

I figure that anything I can do to speed up disk performance in the
news spool disk is worth a fair amount of effort.  News is pretty close
to a worst case scenario for disk performance due to the small files
(3.5KB average, 2KB typical) and the fact that news requires tons of
random seeking.  I'd like to plan this system so that it can easily
handle the news load for several years as well as support about 100
readers.  Right now my news spool contains about three-hundred-
thousand files and I expire news after 5 days.

I plan to move to the INN transport software with this new news server.

I've been researching things and I believe that using a log structured
file system (provided by Veritas Volume Manager) can probably buy me
some extra performance, particularly during news reception (lots of
small writes).  Is this correct?  I believe Sun's Online: Disksuite 3.0
provides a logging UFS.  Might this increase performance if I decide
not to use Veritas for some reason?

Also, we'll be using multiple 1GB disks.  I suspect that using multiple
disks with large stripe (interlace) sizes will also be a win since
individual articles end up on one disk or another but not spread over
each except for the relatively rare large article.  I'm hoping this
will reduce the individual seeking that has to be done on each disk.
Even if it doesn't buy me any performance (though I hope it will) I'm
hoping this will at least reduce wear on the disks by distributing the
workload.

I'd appreciate any comments or advice on this subject.

--Bill Davidson

2. Visual C++ problem

3. algorithm for log or natural log

4. SSH and redhat

5. Temp & Press Logging Help

6. Booting Solaris 7: 64-bit kernel (default) vs 32-bit kernel

7. Basic RAID questions

8. HP LaserJet 5L fuser

9. Basic tape drive question.

10. LUN concept, basic question

11. Basic Hard drive questions

12. The comp.periphs.SCSI FAQ (Re: Basic Questions)

13. a basic question