The problems with floating point are not merely headaches for computer engineers; they can have real-world consequences. For instance,
- Without the previously mentioned algebraic laws, distributing a computation among multiple processing cores (parallelism) becomes unreliable: the order of operations changes the result, so error can build until the final answer is meaningless.
- With floating point operations, numerical calculus (one of the most useful tasks computers can help with) is nearly impossible without a battery of tricks to reduce and correct error that can otherwise completely throw off the results of differentiation.
- In the past, floating point error has contributed to human fatalities, such as the failure in 1991 of an American Patriot missile battery, whose accumulated floating point error threw off its internal clock.
- Floating point errors cause numerous perplexing mistakes in Microsoft Excel calculations. Since Excel is frequently used by scientists and researchers, this is a serious matter.
- Problems with the floating point unit on Intel's Pentium chip caused that chip to be recalled in 1994, due to an error that came to be dubbed the FDIV bug. The culprit was a faulty lookup table used to speed up floating point division.
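Several of the failures above trace back to two basic facts about binary floating point: addition is not associative, and common decimal fractions like 0.1 have no exact binary representation. A few lines of Python make both concrete:

```python
# Floating point addition is not associative: the grouping of
# operations changes the result, which is why naively splitting
# a sum across cores can change the answer.
a, b, c = 1e16, -1e16, 1.0
left = (a + b) + c    # the two large terms cancel first, leaving 1.0
right = a + (b + c)   # 1.0 is absorbed by -1e16 and lost, leaving 0.0
print(left, right)    # 1.0 0.0

# Many decimal fractions (0.1 among them) cannot be represented
# exactly in binary -- the root of the 1991 clock-drift failure
# and of many surprising spreadsheet results.
print(0.1 + 0.2 == 0.3)  # False
```

The same reordering effect, repeated across millions of operations on thousands of cores, is what makes large parallel sums non-reproducible in floating point.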
Why does floating point still prevail? There are two answers: first, because floating point has been in use for decades, and people are accustomed to it. And second, because there have not been any real, viable alternatives until now.
@theovalich @nvidia @IEEEorg @intel #unums are so fantastic that software implementations should be impressing us real soon now. Waiting…
— john danskin (@johnmdanskin) March 17, 2015
Gustafson, who recently gave VR World an interview, published a book less than a month ago on the subject of his new format, which he calls the unum (universal number).
The premise of the unum is very simple: a flexible, self-describing number format that can represent a number of any precision with a finite number of bits. The result is an unambiguous format that takes up less space, does not hide rounding errors, and, according to Gustafson, is compatible with the traditional algebraic laws, making it usable for differential calculus, parallel computing and other challenges that floating point numbers could not previously meet.
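The key to "does not hide rounding errors" is that a unum carries an extra bit (the "ubit") marking whether a value is exact or lies in an open interval. The following toy sketch illustrates only that interval idea, not Gustafson's actual bit-level encoding: instead of silently rounding, each value carries explicit lower and upper bounds, so a result honestly reports its own uncertainty. The `bounded` and `add` helpers are hypothetical names invented for this illustration.

```python
# Toy illustration of interval-style arithmetic (NOT the unum
# format itself): carry exact rational bounds instead of rounding.
from fractions import Fraction

def bounded(value, slop):
    """A value known only to lie within +/- slop (hypothetical helper)."""
    v, s = Fraction(value), Fraction(slop)
    return (v - s, v + s)

def add(x, y):
    # Interval addition: lower and upper bounds add pairwise.
    return (x[0] + y[0], x[1] + y[1])

tenth = bounded("1/10", "1/1000")  # 0.1 known to three decimal places
total = tenth
for _ in range(9):
    total = add(total, tenth)      # sum ten copies of ~0.1

lo, hi = total
print(float(lo), float(hi))        # bounds that provably contain 1.0
```

Unlike a plain float sum of ten 0.1s, which quietly lands near (but not at) 1.0, the interval result never claims more precision than it has: the true answer is guaranteed to lie between the printed bounds.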
In a slide presentation published by Gustafson, unums were used to beat complex problems designed by William Kahan, the "Father of Floating Point," to stump floating point numbers. The unum seems to win, hands down.
When asked about Gustafson’s new format, influential computer scientist and supercomputer expert Jack Dongarra from the University of Tennessee was cautiously enthusiastic, acknowledging the stagnancy of the industry Gustafson proposes to change. “He has an uphill battle to deal with,” said Dongarra. “We have some 30 years’ experience with IEEE arithmetic today. We have much longer experience with the ‘standard way’ of dealing with floating point arithmetic.”
But, Dongarra conceded, “he’s proposing something which I think is terrific. I think innovations like that are needed.” He concluded with a challenge: “I’m not in a position to evaluate the details to see if what he has done can be effectively used for scientific computing. One would have to see validation before we would consider going off and building hardware.”
Gustafson says he has a working unum environment, which is included for free with his book. To win his ‘uphill battle’, Gustafson might do well to release this package as open source for examination by professionals, who are in no small need of convincing, and perhaps in far greater need of his solution.
Edit: Dr. Gustafson has, in fact, made the prototype environment that is included with his book available as a free and open source download through CRC Press (under “Downloads / Updates”). The environment can be viewed with Wolfram’s freeware CDF Player, or with regular text editing programs.