Algorithms


Real-Number Approximation for Real Programmers

Mark Gingrich


Mark currently works on real-time, embedded software in the medical device industry. Although trained in physics and computer science, he occasionally "moonlights" as an amateur astronomer. He can be reached at 355 Estabrook St., Apt. 403, San Leandro, CA 94577.

Introduction

Real-world numerical calculations rarely, if ever, call for floating-point precision to the gazillionth digit. Indeed, do you really need to know your automobile's gas mileage to one-part-per-billion resolution?

Of course, such practicalities influence the design of device software. And living at one computational extreme are those lean, mean microcontroller-embedded machines which often abstain from floating-point code. Yet it is frequently the case that an oddball transducer scaling factor or proportionality constant is required — oddball in refusing to conform to our pristine integer-only universe. Hence the need to link in extra bytes from the floating-point math library into what is, all too often, a nearly full ROM space.

This article describes a simple utility that computes integer-ratio approximations for floating-point constants. Code requiring only a few instances of floating point for scaling may instead use nearly equivalent integer ratios, gaining speed and size benefits by avoiding the floating-point routines altogether.

For example, instead of scaling a quantity by, say, 0.66666666, you could multiply by 2, then divide by 3. But this is a trivial case. A much more interesting question is: What integer ratios best approximate those perennially popular (and irrational) constants, such as the square root of 2, or e, or π?

Approximating Pi by the Slice

Let's try π. The first few decimal digits are:

3.14159265358979323846...
Chopping after the first two digits suggests the crude integer-ratio approximation 31/10. Reality differs by little more than one percent in this instance.

But maybe that's not accurate enough. So tack on another digit. p then approximates to 314/100 (157/50 when reduced to lowest terms), just 0.05 percent from the truth. Continuing in this fashion for four more iterations yields the ratio:

   3141593/1000000
with a mere 0.00001 percent error. Not bad for such a straightforward (I'll call it radix-10) method. Yet π is better approximated without so many digits.

Although not immediately apparent, the ratio 355/113 is superior to

   3141593/1000000
There are two compelling reasons why. First, 355/113 is nearer to p. Second, it has a smaller numerator (355 versus 3141593, the latter not even within 16-bit number range). Here is a crucial computational advantage. When a ratio is applied as a scaling factor, a smaller numerator permits larger multiplicands without overflow hassles.

So 355/113 is better. But is it just a freakish occurrence? Table 1 implies otherwise. It shows a sequence of radix-10 rational approximations for p juxtaposed with a few "hand-picked" ratios (355/113 being one). Each of the latter provides better accuracy with smaller integers — more accuracy per bit. Certainly it's not difficult to devise successively more accurate radix-10 ratios by inspection (31/10, 314/100, 3142/1000,...), but discovering these preferred "hand-picked" ratios is more subtle. There is a way to do so. It employs a mathematical device known as a continued fraction.

Fractions, to be Continued

The continued-fraction form is a real number's alter ego. I'll intuitively illustrate the concept by creating a continued fraction for π. Take π's decimal-digit sequence and break it into a sum of two components — the integer part and the fractional part.

   π = 3 + 0.1415926535897932...
Ignoring the fractional part for a moment, I could quit right here and claim that π is equal to 3 — a first-cut estimate. Instead, I'll show a bit more initiative and improve the approximation. Rewrite the fractional part as follows:

   π = 3 + 1/(1/0.1415926535897932...) = 3 + 1/7.0625133059310457...

Fixating on the denominator (7.0625133059310457...), I note again that I'm dealing with a real number. So break it into a sum of integer and fractional parts:

   π = 3 + 1/(7 + 0.0625133059310457...)

Like before, I may ignore the fractional component and stop at this point, leaving the result:

   π ≈ 3 + 1/7 = 22/7

Or, yet again, I can improve the approximation. Rewrite the fractional denominator term:

   π = 3 + 1/(7 + 1/(1/0.0625133059310457...)) = 3 + 1/(7 + 1/15.9965944066...)

The pattern to this algorithm now should be conspicuous. Break the bottom-most denominator into integer and fractional parts:

   π = 3 + 1/(7 + 1/(15 + 0.9965944066...))

Ignoring the fractional component, I see that the approximation becomes:

   π ≈ 3 + 1/(7 + 1/15) = 333/106

For good measure, I'll do just one more iteration of this process:

   π ≈ 3 + 1/(7 + 1/(15 + 1/1)) = 355/113

And so on, and so on. These first few approximations should look familiar. They are the "hand-picked" ratios of Table 1. And I could continue generating still better ratios seemingly forever, or stop when I achieve the desired accuracy.

This construct of nested fractions:

   R = a0 + 1/(a1 + 1/(a2 + 1/(a3 + ...)))

is called a simple continued fraction. It's "simple" because each numerator is unity. Given any nonnegative real number, R, the terms a0, a1, a2, ... result by fiat of Algorithm A (Figure 1). Thus the terms needed to make π are:

[3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, ...]
which is not as instantly recognizable as 3.14159..., but just as valid.

Experimenting with this algorithm and a judicious assortment of real numbers reveals several notable continued-fraction characteristics: a rational number always yields a finite sequence of terms; an irrational number yields a sequence that never terminates; quadratic irrationals such as the square root of 2 ([1, 2, 2, 2, ...]) repeat endlessly; and chopping the sequence at any point produces a ratio that no fraction with a smaller denominator can beat.

Algorithm A is simple and intuitive — but flawed in actual practice. The gotcha is the finite floating-point representation of real numbers. After a dozen or so iterations, erroneous terms are produced. The algorithm also can mess up after only a few iterations. The comparison test at the top of the loop often flubs because round-off prevents the fractional result from reaching zero exactly. A further annoyance arises from having to collapse the continued fraction to a simple fraction for use as a scaling factor.

Algorithm B (Figure 2) sidesteps these pitfalls. It is a recurrence method using integers only. First you must decide how precisely — how many digits — you want to specify the real number of interest, then express it as a radix-10 ratio. For example, 3.1416 is 31416/10000. The algorithm starts out by producing a crude estimate, then grinds out additional ratios, p[i]/q[i], each a simple fraction in lowest terms and more accurate than the previous one, until the initial radix-10 ratio is achieved. (Algorithm mavens will note a similarity to Euclid's algorithm. It's not a coincidence. See Knuth, 1981.)

Real-World Usage

Listing 1 details an implementation of Algorithm B in ANSI C. This short utility, called fp2ratio, accepts a floating-point number as a command-line argument and displays the ever-improving ratio approximations in succession. Output for a 9-digit representation of p appears in Figure 3.

But before applying one of these ratios as a scaling factor, a caveat. Keep in mind the round-off correction needed before division by the denominator (denom). Positive values are scaled thusly:

   scaled = (value * num + denom/2) / denom

So much for the theory; let's try a practical application. Imagine a relatively noiseless sinusoidal signal digitized to 10-bit precision. The analog-to-digital converter scaling is 1 millivolt per least-significant bit. You wish to display, to three decimal-digit precision, the signal's root-mean-square (RMS) value in millivolts. Here's one strategy. Take the difference between the minimum and maximum signal excursions — the peak-to-peak value — then convert to RMS by scaling:

   RMS = peak-to-peak / (2 * sqrt(2)) = peak-to-peak * 0.353553391...

The peak-to-peak value is guaranteed never to exceed decimal 1,023. (It's a 10-bit result, remember.) So declare it an unsigned 16-bit word. As a guard against 16-bit overflow during scaling, the selected ratio's numerator must not exceed 64. Figure 4 shows fp2ratio's output for the scaling factor:

   0.353553391
The best ratio honoring our overflow constraint is 35/99. Implementing the round-off correction, the scaling expression to appear in code is:

   rms = (pp * 35 + 99/2) / 99;

Conclusion

Another possible application of this approach is the "clean up" of numerical values due to loss of fidelity. Take, for example, the computation of a matrix inverse, where the matrix elements are known rationals. You crunch the numbers. You get your inverse — along with unavoidable round-off and truncation effects. Yes, the answer is very close, but it's not an exact rational result. A bit of post-processing, a la the methods described, may help recover the true rational answer.

Continued fractions, it turns out, are applied well beyond the ratio approximation of real numbers. Functions, too, may be approximated, but with polynomials replacing the integers in the numerators and denominators. This is an alternative to a power-series approximation (Press, 1986).

But as handy as they are, continued fractions haven't achieved household-name status. The author Petr Beckmann has lamented that they are "part of the 'lost mathematics,' the mathematics now considered too advanced for high school and too elementary for college" (Beckmann, 1971).

Real programmers are inveterate collectors of tricks and techniques. Consider adding continued fractions to your algorithm toolbox.

References

Beckmann, Petr. 1971. A History of Pi. New York: Dorset Press.

Knuth, Donald E. 1981. The Art of Computer Programming. Volume 2: Seminumerical Algorithms., 2nd ed. Reading, MA: Addison-Wesley.

Olds, C.D. 1963. Continued Fractions. New York: Random House, New Mathematical Library.

Press, William H., Flannery, Brian P., Teukolsky, Saul A., Vetterling, William T. 1986. Numerical Recipes. Cambridge, England: Cambridge University Press.

Sinnott, Roger W. 1989. "Continued Fractions and the Sky." Sky & Telescope (January): 80-82.

Table 2

Sidebar: "Baseball, Mom, and Rational Pi"