My Cousin Corbett dropped by yesterday full of ideas about improving computer math. Primary elections around the country had him thinking about the limitations of the binary two-party system, he told me. "Binary math only seems natural in machines based on the movement of electrons," he explained, "but in a quark-based computer, ternary math would be natural."
Ignoring my objections about silicon design problems, he also brushed off the purely numeric issues. "We could rewrite all the math software there is so that it used trits, or twits, or whatever ternary digits are called. The real problem comes in the logical interpretation of that third twit."
"Doesn't modal logic attempt to solve that problem?" I asked.
"Too presumptive," he replied. "It's not clear yet whether twit three should be regarded as a positive vote for a third party candidate or as a protest vote against all the incumbents."
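For the record, the usual name is "trit," and the purely numeric part of Corbett's scheme is unremarkable. A minimal Python sketch (the function name is mine, not his) of pulling the trits out of an integer:

```python
def to_trits(n):
    """Return the base-3 digits ('trits') of a non-negative
    integer, least significant trit first."""
    if n == 0:
        return [0]
    trits = []
    while n:
        n, t = divmod(n, 3)
        trits.append(t)
    return trits

print(to_trits(17))  # -> [2, 2, 1], i.e. 17 = 1*9 + 2*3 + 2
```

The logical interpretation of the third trit remains, as he says, an open question.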
He was more clear about the virtues of randomness in software design. "The neural net people have something called 'simulated annealing,' based on the familiar principle that any complicated system needs a whack on the side now and then. All kinds of algorithms seem to work better if you introduce a stochastic element, and there's some guy in Georgia who is getting incredible image-compression ratios from fractal algorithms. And you know fractals are close kin to chaos theory."
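For readers who haven't met it, simulated annealing works roughly as Corbett describes: random perturbations ("whacks") whose size, and whose odds of being accepted when they make things worse, shrink as a temperature parameter cools. A minimal Python sketch, with a toy objective function of my own choosing:

```python
import math
import random

def anneal(f, x, temp=10.0, cooling=0.95, steps=500):
    """Minimize f by simulated annealing: random whacks whose
    size, and whose odds of acceptance uphill, shrink as the
    temperature cools."""
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0) * temp
        delta = f(candidate) - f(x)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-delta/temp) -- the stochastic element.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < f(best):
            best = x
        temp *= cooling
    return best

random.seed(1)  # reproducible whacks
print(anneal(lambda v: v * v, 8.0))
```

The occasional accepted uphill move is what lets the search escape local minima that a pure descent would get stuck in.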
"Annealing is a technique from metallurgy," I recalled, "involving the heating and slow cooling of metals."
"Heat and randomness are the same thing, of course," he said, adding that he had been looking into the possibility of introducing a kind of beneficent global warming into software generally, through some sort of cybernetic ozone hole. An approach he referred to as Ozoneless Obfuscation of Program Structure would use black-box software components written by strangers to introduce subtle unpredictabilities into programs.
"This is all pretty theoretical," I said. "Are you working on anything more practical?"
"Of course," he answered. "What issue could be more practical than numeric precision? And yet we still don't have systems that deal appropriately with precision concerns."
"I disagree. You can typically specify the precision with which you want to work."
"Ah," he responded, "but no numeric representation or math package lets you attach a degree of importance to precision. Here's what I mean: Every digit of a ten-digit number gets treated with equal importance. But are they equally important? No; for any practical numerical purpose, rounding errors get less important as you move to the right in the number. You can see that the tools we have don't address this need if you consider the transmission of numeric data. Although an error in bit position n of a number is more serious than an error in bit position n+1, no communication package provides error correction that treats the data according to this differential importance."
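His arithmetic here is easy to check: in a binary representation, toggling bit k changes the value by exactly 2**k, so each step toward the high-order end doubles the damage. A quick Python illustration:

```python
def flip_bit(x, k):
    """Return x with bit k toggled (k = 0 is the least significant)."""
    return x ^ (1 << k)

x = 0b10101100  # 172
for k in (7, 3, 0):
    print(f"bit {k}: error magnitude {abs(flip_bit(x, k) - x)}")
# bit 7: error magnitude 128
# bit 3: error magnitude 8
# bit 0: error magnitude 1
```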
I objected that he was talking semantics. Ultimately, I argued, importance is a matter of the interpretation the user puts on the data, something you couldn't possibly build into the math routines.
To my surprise, he agreed. "The problem is real enough, but solving it does require some user involvement, to provide that semantic component you referred to."
"It may be a practical problem, but it doesn't sound like you have a practical solution."
"Oh, but I do. I've designed a new computer that lets the user supply that semantic component. I call it the Analogical Engine, and it's intended for engineers, architects, general contractors, and anyone who needs to work with numbers that represent real measurements."
The machine, he explained, is small, cheap, and energy efficient, and operates on principles quite different from today's conventional von Neumann architecture. Even the I/O is distinctive: Measurements are entered by manually positioning sliding bars, allowing the user to control the precision of the input. Results are read off the same sliding bars, and the precision of the output is a direct function of the eyesight of the user. The interface is familiar to anyone who has read a ruler.
The markings on the sliding bars, though, are not spaced linearly as on a ruler, but scaled to logarithmic and cosine and other useful functions. This has the interesting feature of washing out some computational differences. For example, squaring a number takes no longer than adding two numbers. But you do get differences where you ought to get them: computing two digits of precision takes less time and effort than computing four.
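The computational claim checks out: on a logarithmic scale, multiplying two numbers is a single addition of distances, and squaring is that same single addition, no slower. A short Python sketch of the principle (function names mine):

```python
import math

def scale_multiply(a, b):
    """Multiply by adding distances on a logarithmic scale,
    then reading the answer back off the scale."""
    return math.exp(math.log(a) + math.log(b))

def scale_square(a):
    # Squaring is the same single addition of distances.
    return scale_multiply(a, a)

print(round(scale_multiply(2.0, 3.0), 9))   # 6.0, to rounding
print(round(scale_square(7.0), 9))          # 49.0, to rounding
```

The small rounding residue in the results is the software analogue of the eyesight-limited precision Corbett describes.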
Corbett is currently designing a clip-on holder for the machine so that it can be worn on the belt or in a shirt pocket.
Copyright © 1992, Dr. Dobb's Journal