Numerical Precision, Accuracy, and Range

Computers do not process real numbers, nor even integers. They process finite subsets of either, and, unfortunately, operations on these finite values do not have exactly the same properties that most math classes teach. The numeric types used are generally identified primarily by their fixed precision: how many bits hold each value? However, accuracy is usually far more important: how close is the computed value to the one that would have been obtained using infinite precision? The relationship between accuracy and precision is disturbingly subtle.

For example, given finite-precision floating-point values, (a+b)+c often yields a very different value from a+(b+c). Minor restructuring of the operation sequence can yield single-precision results that are more accurate than those originally obtained using double precision! Even using integers, there are surprises; for example, averaging two values, a and b, is not as simple as (a+b)/2 nor even (a/2)+(b/2) (here is a way to get the accurate floor of the average; see the sketch below). There is an old joke that it is easy for a computer to do arithmetic very fast, so long as the answer doesn't have to be correct... we aren't laughing. Instead, we've been doing a lot of work toward making accuracy as predictable and controllable as possible with minimal computational overhead.
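The linked trick isn't reproduced here, but one well-known way to get the exact floor of the average without overflow is the carry identity (a & b) + ((a ^ b) >> 1). The sketch below (plain C, unsigned 32-bit values assumed for illustration) contrasts it with the two naive forms and also demonstrates the associativity failure mentioned above:

```c
#include <stdio.h>
#include <stdint.h>

/* Naive average: the sum a + b can wrap around before the divide. */
static uint32_t avg_naive(uint32_t a, uint32_t b)
{
    return (a + b) / 2;
}

/* (a/2)+(b/2) never overflows, but drops 1 when both a and b are odd. */
static uint32_t avg_halves(uint32_t a, uint32_t b)
{
    return (a / 2) + (b / 2);
}

/* Overflow-free floor of the average: (a & b) is the bits the operands
   share (each counted once at full weight), and (a ^ b) >> 1 is half of
   the bits where they differ; the sum is exactly floor((a + b) / 2). */
static uint32_t avg_floor(uint32_t a, uint32_t b)
{
    return (a & b) + ((a ^ b) >> 1);
}

int main(void)
{
    uint32_t a = 4000000001u, b = 4000000003u;
    printf("naive  = %u\n", avg_naive(a, b));   /* wrong: the sum wrapped */
    printf("halves = %u\n", avg_halves(a, b));  /* off by one: both odd */
    printf("floor  = %u\n", avg_floor(a, b));   /* 4000000002 */

    /* Floating-point addition is not associative either: */
    float x = 1.0e8f, y = -1.0e8f, z = 1.0f;
    printf("(x+y)+z = %g  but  x+(y+z) = %g\n", (x + y) + z, x + (y + z));
    return 0;
}
```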

It is interesting to note that, just over the past year, GPUs have standardized a new, even lower-precision floating-point format: EXT_packed_float values place 3 unsigned floating-point numbers in each 32-bit object. The RGB encoding uses a 5-bit exponent with a bias of 15. R and G each get a 6-bit mantissa, while B gets only 5 bits. That gives field sizes of 11, 11, and 10 bits. Slide 16 here gives a nice summary.
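As a concrete sketch of how those fields decode, here is a minimal C unpacker. It follows the usual small-float rules for a 5-bit exponent with bias 15 (exponent 0 is zero/denormal, exponent 31 is Inf/NaN); the packing order shown, with red in the low bits, is an assumption patterned on the GL *_REV layout and should be checked against the EXT_packed_float spec:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Decode one unsigned small float with the given mantissa width:
   exponent 0 is zero/denormal, 31 is Inf/NaN, else 2^(e-15)*(1+m/2^bits). */
static float unpack_small_float(uint32_t v, int mant_bits)
{
    uint32_t m = v & ((1u << mant_bits) - 1);
    uint32_t e = v >> mant_bits;                 /* 5-bit exponent */
    float scale = (float)(1u << mant_bits);

    if (e == 0)  return ldexpf((float)m / scale, -14);       /* zero / denormal */
    if (e == 31) return (m == 0) ? INFINITY : NAN;           /* Inf / NaN */
    return ldexpf(1.0f + (float)m / scale, (int)e - 15);     /* normal */
}

/* Unpack one packed 32-bit word: R in the low 11 bits, then G (11 bits),
   then B in the top 10 bits (assumed *_REV component order). */
static void unpack_r11g11b10(uint32_t packed, float rgb[3])
{
    rgb[0] = unpack_small_float(packed & 0x7FF, 6);          /* R: 11 bits */
    rgb[1] = unpack_small_float((packed >> 11) & 0x7FF, 6);  /* G: 11 bits */
    rgb[2] = unpack_small_float(packed >> 22, 5);            /* B: 10 bits */
}

int main(void)
{
    /* Encode 1.0 in each channel: exponent 15, mantissa 0. */
    uint32_t one = (15u << 6) | ((15u << 6) << 11) | ((15u << 5) << 22);
    float rgb[3];
    unpack_r11g11b10(one, rgb);
    printf("R=%g G=%g B=%g\n", rgb[0], rgb[1], rgb[2]);      /* 1 1 1 */
    return 0;
}
```

Note how the asymmetry shows up in the mantissa width only: all three channels share the same exponent range, so B loses one bit of relative accuracy, not dynamic range.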
