Posted to microsoft.public.excel.misc
From: Jerry W. Lewis
Subject: Improve Excel accuracy over 15 digits

I would like to correct the false impression left by your Patriot Missile
example. The problem was not binary math per se, but programming that was
not appropriate to the hardware it was intended to run on. Programmers are
expected to know the limitations of the intended hardware and work around
them effectively. Unfortunately, that did not happen here.

Details of the original incident are summarized at
http://www.ima.umn.edu/~arnold/disasters/patriot.html

The guidance system performed an external time check only when the missile
battery was turned on. It interpreted the current time as the number of
tenths of a second since the system booted, which means that no arithmetic
system could make up for errors in the internal clock. Since knowing the
time was critical to the guidance system, defensive programming would have
periodically re-verified the time against an external source. The Dutch
Patriot batteries (using the same flawed software) were much more accurate
because they rebooted frequently, which forced the time to be verified
more often.
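
To make the idea concrete, here is a hypothetical sketch (Python, purely
for illustration; the names, the resync interval, and the external source
are all my inventions, not the Patriot design):

import time

# Hypothetical illustration only. A clock that counts tenth-second
# ticks, but periodically re-anchors itself to an external time source,
# discarding whatever error the ticks have accumulated.
RESYNC_EVERY = 36000  # ticks between external checks (assumed value)

class TickClock:
    def __init__(self, external_time=time.time):
        self.external_time = external_time
        self.base = external_time()  # last externally verified time (s)
        self.ticks = 0               # tenth-second ticks since then

    def tick(self):                  # called 10 times per second
        self.ticks += 1
        if self.ticks >= RESYNC_EVERY:
            self.base = self.external_time()  # drop accumulated drift
            self.ticks = 0

    def now(self):
        return self.base + self.ticks * 0.1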

Even if the clock had been accurate, it is true that decimal-to-binary
conversion would have had an impact in the original algorithm, but the
internal storage format had effectively less than single precision for the
representation of 1/10. Had IEEE double precision (used by Excel) been
employed, the time error after 100 hours of continuous operation would have
been about 2E-11 seconds instead of 0.3 seconds, and thus would have been
completely negligible.
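
For anyone who wants to check that arithmetic, here is a short Python
sketch (my reconstruction, following the analysis at the link above, which
effectively chops 1/10 to 23 fractional bits of the 24-bit register):

from fractions import Fraction

TENTH = Fraction(1, 10)
TICKS = 100 * 60 * 60 * 10  # tenth-second ticks in 100 hours

# Patriot register: 1/10 chopped to 23 fractional bits
patriot = Fraction(int(TENTH * 2**23), 2**23)

# IEEE 754 double precision: the exact value of the double nearest 1/10
ieee = Fraction(0.1)

print(float(abs(TENTH - patriot) * TICKS))  # ~0.34 seconds
print(float(abs(TENTH - ieee) * TICKS))     # ~2e-11 seconds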

Also, it did not take this tragic miss to uncover the problem: a software
patch had already been written and shipped; it just arrived one day too late.

Jerry


"Greg Lovern" wrote:

On Aug 27, 7:02 am, Jim Thomlinson <James_Thomlin...@owfg-Re-Move-This-.com> wrote:
Now that everyone else has waded in, I will ask the question... Why would you
need more than 15 digits of accuracy? With that many digits I can get the
distance from here to the sun to less than a millimeter. Just curious.



Hi Jim,

Most of my customers don't use xlPrecision for numbers that represent
any kind of physical measurement. Instead, they are mostly using
numbers to represent abstract, non-physical quantities. Many of them
are in finance.

Others are doing pure mathematics -- mostly number theory, where, as
far as I can tell, the number doesn't represent anything other than
itself and the patterns of digits within it.

Others are doing things so unique I'm not sure how to categorize them.
There may be a few delusional cranks among them, but most of them come
across as highly intelligent and educated; certainly not less so than
the chorus of dogmatists who insist there is no valid use for more
than 15 significant digits of precision.

In some cases it's not only about the precision but also about very
large and very small numbers, far outside Excel's numeric range.
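
For illustration (a Python sketch using its decimal library, not
xlPrecision itself): Excel's doubles are limited to a range of roughly
1E-307 to 1E+308 in magnitude, while arbitrary-precision decimal
arithmetic is not:

from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits, well past Excel's 15

# Values far outside Excel's numeric range:
tiny = Decimal('1E-500')
huge = Decimal('1E+500')
print(tiny * huge)  # 1E+0 -- no overflow or underflow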

Some of them just want exact results (no binary conversion errors)
without dealing with the drawbacks of Precision As Displayed or the
ROUND function. Generally those customers are well aware of what the
options are.
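
A minimal illustration of the binary conversion errors in question
(again Python; xlPrecision itself is an Excel add-in, so this is only
an analogy):

from decimal import Decimal

# Binary doubles cannot represent 0.1 exactly, so small errors surface
# in ordinary arithmetic:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# Exact decimal arithmetic sidesteps the conversion entirely:
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True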

BTW, here's an example of how precision can be a matter of life and
death:

During the Gulf War, a binary conversion error led to the deaths of 28
American soldiers (and around 100 injured) on February 25th, 1991, when
an American Patriot missile failed to intercept an Iraqi SCUD missile
headed toward their Army barracks. The error was caused by storing
time in 1/10-second increments, with a small error introduced each
time 0.1 was converted to binary. Though each conversion error was
tiny, the error accumulated enough to make the Patriot's navigation
software miss the SCUD, which then reached its target.


Also BTW, I'm curious -- how does 15 significant digits of precision
get you the distance from here to the sun to less than a millimeter?
Isn't the average distance approximately 149,600,000,000,000
millimeters? That would let you get to the *nearest* millimeter with
15 significant digits of precision (or within one millimeter, assuming
truncation rather than rounding), but not to a fraction of a
millimeter. Not that it really matters, of course.
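
For what it's worth, that arithmetic is easy to check (Python, using
the same round figure of 149,600,000 km):

AU_MM = 149_600_000 * 1000 * 1000  # mean Earth-Sun distance in mm
print(AU_MM)             # 149600000000000
print(len(str(AU_MM)))   # 15 -- the 15th significant digit is the mm digit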


Greg Lovern
http://precisioncalc.com