Excel and the Math Coprocessor for DLLs

Errata.... I wrote:
If Lynn had followed Nicely's implementation
correctly, he wouldn't have had any problem
with the FDIV constants despite the change in
precision mode between Watcom languages and
Excel VBA


But perhaps Nicely's implementation or something nearly identical was not
available when Lynn wrote his(?) algorithm.

The pentbug.c file that I find today has only a 2011 copyright. But it
presumably was written in or before 2003, since that is the date attributed
to the pentbug.zip file on Nicely's website.

If the algorithm had been made public at some earlier date (e.g. 1995), I
would think that Nicely would know that the earlier date(s) of "publication"
should be included in the copyright.


PS.... I wrote:
IMHO, Nicely was wrong to use 64-bit representation
to demonstrate the FDIV bug. Luckily, it worked,
but only because the FDIV error was so large, about
8E-5. He might have overlooked more subtle errors
due to rounding the intermediate 80-bit results to
64-bit representation.


I meant to add.... I also think that it was "wrong" to use (x/y)*y-x to
demonstrate the FDIV bug in the first place.

Well, I do agree that it had "marketing appeal" insofar as it might have
made the defect more relevant to some people.

But my point is.... IMHO, that is not the correct way to test for the
presence of the defect.
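
For reference, the kind of check I mean looks something like this. This is
only my own sketch (the function name and the 1.0 threshold are my choices,
not Nicely's code); on a correct FPU the residual is a tiny rounding error,
well below 1, whereas on a flawed Pentium it comes out to roughly 256.

#include <math.h>

double ident1(double x) { return x; } // thwart compiler optimizations

int fdivTestOldStyle() // 1 = okay; 0 = error
{
const double x = 4195835;
const double y = 3145727;
// (x/y)*y should recover x almost exactly; the flawed FDIV leaves ~256
return (fabs(ident1(x)/ident1(y)*ident1(y) - ident1(x)) < 1.0);
}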

Actually, it was Tim Coe who discovered the error with 4195835.0/3145727.0
per se. It represents the largest known error among the many defective FDIV
results.

And it does appear that Nicely or Coe did look at the result of the division
using 80-bit arithmetic, at least eventually.
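
For what it's worth, on a compiler whose long double is the x87 80-bit
format (gcc on x86, for example; not MSVC, where long double is just
double), the extended-precision quotient can be examined directly. This is
just my own sketch:

#include <stdio.h>

int main(void)
{
// keep the quotient in 80-bit (64-bit mantissa) format
volatile long double q = 4195835.0L / 3145727.0L;
printf("%.25Lf\n", q); // should match 1.33382044913624,10025 to ~19 digits
return 0;
}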

Nicely's published results of the division are (the comma marks off the
first 15 significant digits):

1.33382044913624,10025 (Correct value)
1.33373906890203,75894 (Flawed Pentium)

Note that the representation of the "correct value" is more precise than the
64-bit representation can be. The 64-bit representation is exactly
1.33382044913624,10930577198087121360003948211669921875, somewhat greater
than 1.333...,10025. The next closest 64-bit representation is exactly
1.33382044913624,0871013114883680827915668487548828125, which is too low.
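
Those exact decimal expansions can be reproduced with a few lines of C.
This is my own sketch; it assumes a C library that prints the full
expansion (recent glibc and the newer MSVC runtimes do):

#include <stdio.h>
#include <math.h>

int main(void)
{
double q = 4195835.0 / 3145727.0; // rounds to the nearest 64-bit value
double lower = nextafter(q, 0.0); // next representable value below it
printf("%.52f\n%.52f\n", q, lower);
return 0;
}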

Based on the information available today, I would implement the FDIV bug
test as follows.

#include <math.h>

double ident(double x) { return x; } // thwart compiler optimizations

int fdivTest() // 1 = okay; 0 = error
{
const double num = 4195835;
const double denom = 3145727;
const double quot = 1.33382044913624;
return (fabs(ident(num)/ident(denom) - quot) < 5e-15);
}

It does not matter whether that is evaluated consistently in 64-bit (53-bit
mantissa) or 80-bit (64-bit mantissa) representation or a mix.
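
To illustrate, here is a small driver of my own that runs the test in both
precision modes. The _controlfp() call and the _PC_53/_PC_64/_MCW_PC
constants come from MSVC's <float.h>, and as far as I know the _MCW_PC mask
is honored only in 32-bit (x86) builds; elsewhere the test simply runs in
whatever mode is current.

#include <stdio.h>
#ifdef _MSC_VER
#include <float.h>
#endif

int fdivTest(); // the test above

int main(void)
{
#if defined(_MSC_VER) && defined(_M_IX86)
_controlfp(_PC_53, _MCW_PC); // 53-bit mantissa (64-bit representation)
printf("53-bit mode: %s\n", fdivTest() ? "okay" : "FDIV error");
_controlfp(_PC_64, _MCW_PC); // 64-bit mantissa (80-bit representation)
printf("64-bit mode: %s\n", fdivTest() ? "okay" : "FDIV error");
#else
printf("%s\n", fdivTest() ? "okay" : "FDIV error");
#endif
return 0;
}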

FYI, we cannot expect the division result to have the same 64-bit binary
representation as the constant 1.33382044913624.

In fact, the two closest representations of the constant are
1.33382044913623,9982834695183555595576763153076171875 and
1.33382044913624,02048793001085869036614894866943359375.
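
That is why the test compares with a tolerance rather than for equality. A
quick check of my own shows the actual gap:

#include <stdio.h>
#include <math.h>

int main(void)
{
const double quot = 1.33382044913624; // the truncated decimal constant
const double q = 4195835.0 / 3145727.0; // the correctly rounded quotient
// on a correct FPU the gap is about 1e-15, well under the 5e-15 tolerance;
// the flawed-Pentium error of about 8e-5 is far above it
printf("gap = %.3g\n", fabs(q - quot));
return 0;
}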