Why So Many Math Functions?

In mathematics, a logarithm is a logarithm and cosine is a cosine. You'd think there would be one right way to write something as precisely defined as a function from mathematics. Yet the computing world abounds in varied implementations of the same set of basic math functions. Whole books have been written on how to implement these dozen or so creatures. What's going on?

The answer lies in the critical differences between the hard science of mathematics and the pragmatic craft of computing. Each implementation has to make tradeoffs among a number of factors. It is the weight given various tradeoffs that characterizes a particular implementation. Here are some of the issues:

Accuracy — Many computers support only add, subtract, multiply, divide, and compare between floating-point operands, plus a few conversions. A math function must use these, and some seminumerical tricks, to approximate a mathematical ideal to a given machine precision. Getting within a few bits is not too hard, but getting the best possible answer all the time can border on the impossible. A math function typically sacrifices a modicum of performance to meet some stated accuracy goal.

Precision — One way to lose accuracy is to lose precision. (The two are not the same.) On a pocket calculator, type in an arbitrary fraction, such as 0.123456789. Now add one billion and subtract it off again. Where did the low-order fraction digits go? A math function must avoid any such calculations that lose intermediate precision, however sensible the mathematical ideal.

Range — Floating-point arithmetic supports a much wider range of values than integer arithmetic, but it is still finite. An intermediate result can overflow even when the final function value is representable. On some machines, an overflow can terminate program execution. Even underflow is not always best "fixed up" by substituting a zero value and continuing. A math function must avoid any such calculations, or handle them very carefully.

Special codes — IEEE Standard floating-point arithmetic is rife with special codes that ride along with finite numerical values. Besides the pedestrian zero, there is also minus zero (yes!), minus Infinity, plus Infinity, and a whole host of NaN (for "Not a Number") values, both "quiet" and "signaling." A math function must decide how much to trust the underlying hardware and how much special logic to add to deal with these codes, if they might be present.

Portability — Not all modern computers implement IEEE Standard floating-point arithmetic. Even those that do may take unfortunate advantage of the latitude within the standard, or fail to implement it exactly. Beyond IEEE, floating-point arithmetic is notoriously varied among computer architectures. A math function intended to be highly portable must exercise heroic adaptability, and caution.

These and other factors inevitably determine the gross characteristics of a set of math functions — performance, code size, and robustness. Judge the success of an implementation as much on how it achieves its stated goals as on these consequential attributes.

P.J. Plauger