
Error-Free Computing: Unums Save Both Real and Virtual Battles

To many people, the floating point versus universal number (unum) debate seems extraneous: an academic issue for computer scientists, engineers, and hardware manufacturers.

But as John Gustafson said during his keynote at the Supercomputing Frontiers 2015 conference on Tuesday, the inaccuracies of floating point arithmetic have real-world implications. They can be deadly in the literal sense, with missile defense batteries miscalculating intercept times, and, as Gustafson explained, they can also lose battles in a virtual sense.

During intense battles in multiplayer games, floating point calculations would give different answers for different players: whether a shot registered as a lethal headshot or a frustrating miss could vary slightly from platform to platform. To get reliable, reproducible results, the software had to fall back on integer arithmetic.
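A minimal illustration of the reproducibility problem (our example, not Gustafson's): IEEE floating point addition is not associative, so the same sum can change when a compiler or platform evaluates it in a different order. In Python, with ordinary 64-bit doubles:

```python
# Floating point addition is not associative: 1.0 is smaller than the
# gap between consecutive doubles near 1e16, so it survives or vanishes
# depending on the order in which the same three terms are added.
a, b, c = 1e16, -1e16, 1.0

left_to_right = (a + b) + c   # (0.0) + 1.0    -> 1.0
reordered     = a + (b + c)   # 1e16 + (-1e16) -> 0.0

print(left_to_right, reordered)    # 1.0 0.0
print(left_to_right == reordered)  # False
```

Two game clients that merely parenthesize a physics sum differently can therefore disagree about the outcome of the same shot.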

To better understand the benefits of unums, and the challenges of implementing them in hardware, the VR World team spoke with Gustafson on the sidelines of the Supercomputing Frontiers 2015 conference in Singapore.


The VR World team interviews Dr. Gustafson

VR World: You mentioned in your keynote that implementing unums is challenging; in the words of one unnamed Intel executive, 'you can't boil the ocean'. Why is this?

John Gustafson: What he’s saying is that you can’t change the world. All you have is IEEE floats. That’s the standard. ‘You can’t add a new number type, that’s not going to happen’ is what he said.

VRW: How would you categorize the feedback you’ve gotten from CPU vendors about implementing unums?

JG: People at AMD also didn't get it. That was a different kind of opposition. They just didn't see that I could save them so much power, electricity, and bandwidth. Maybe it just looked too ambitious to them.

I'm not worried about what the hardware people think. I know they are going to hate it. They'll have to build it, redesign circuits, and all of that. I'm more interested in everyone else.

VRW: What's the cost of keeping the existing floating point system versus implementing unums? What's the cost of transitioning hardware to support this, versus the cost of errors in everyday life?

JG: Remember: everything you can do with floats you can do with unums; floats are a subset. It's not a choice between one or the other; if it were, I think it would never get off the ground. But if you can still do everything you do now once you have unums, and you can do other things besides, then you can incrementally work your way into them.

The other thing is that right now we have to deal with at least two or three different precisions. Half precision is now out there: Nvidia has it in hardware as a native type, and single precision as well as double precision are everywhere. Quad precision is not supported by anyone's hardware… I keep watching to see if it's going to pop up.

But we already have to manage two or three different sizes.
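As a quick illustration of those fixed sizes (our sketch, using NumPy's float16, float32, and float64 types), the same value rounds differently at each width:

```python
import numpy as np

# Half (16-bit), single (32-bit), and double (64-bit) precision:
# three separate fixed-width types, each rounding 1/3 differently.
x = 1.0 / 3.0
for dtype in (np.float16, np.float32, np.float64):
    print(np.dtype(dtype).name, dtype(x))
# float16 0.3333
# float32 0.33333334
# float64 0.3333333333333333
```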

I say replace them with one, and let the hardware slide continuously across all the different sizes. It will simplify things, so it may be cheaper and smaller on chip to do it that way than to have a bunch of single precision units and double precision units. That's the way they do it now; they have to build separate hardware, which is very wasteful.
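To make the sliding-size idea concrete, here is a toy decoder for the unum 1.0 layout Gustafson describes in his book The End of Error. The function, its environment parameters, and the bit packing shown are an illustrative sketch rather than a standard API, and special values such as infinity and NaN are left out:

```python
def decode_unum(bits: int, ess: int, fss: int):
    """Decode a unum in the {ess, fss} environment into (value, is_inexact).

    Assumed layout, most to least significant bit (unum 1.0 sketch):
      sign | exponent (es bits) | fraction (fs bits) | ubit | es-1 | fs-1
    The trailing utag lets every number carry its own exponent and
    fraction sizes, which is how one format can slide continuously
    across many precisions. Infinity and NaN handling is omitted.
    """
    fs = (bits & ((1 << fss) - 1)) + 1            # fraction size in bits
    es = ((bits >> fss) & ((1 << ess) - 1)) + 1   # exponent size in bits
    ubit = (bits >> (fss + ess)) & 1              # 1 = "lies in an open interval"
    utag = 1 + ess + fss                          # width of the trailing tag
    frac = (bits >> utag) & ((1 << fs) - 1)
    expo = (bits >> (utag + fs)) & ((1 << es) - 1)
    sign = (bits >> (utag + fs + es)) & 1
    bias = (1 << (es - 1)) - 1
    if expo == 0:                                 # subnormal: hidden bit is 0
        value = (-1) ** sign * 2.0 ** (1 - bias) * (frac / 2 ** fs)
    else:                                         # normal: hidden bit is 1
        value = (-1) ** sign * 2.0 ** (expo - bias) * (1 + frac / 2 ** fs)
    return value, bool(ubit)

# sign 0 | exponent 1 | fraction 0 | ubit 0 | es-1 = 0 | fs-1 = 0
print(decode_unum(0b010000, ess=1, fss=1))  # (2.0, False): exactly 2.0
print(decode_unum(0b010100, ess=1, fss=1))  # (2.0, True): open interval above 2.0
```

The point of the sketch is the utag: because each value records its own field widths, one decoder handles what would otherwise require a family of separate half, single, and double precision circuits.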