Beyond Semiconductors

January 1996 Dr. Dobb's Developer Update

by Michael S. Malone

Michael is the author of The Microprocessor: A Biography (Springer-Verlag/Telos, 1995, ISBN 0-387-94342-0), from which this article was adapted.

Innovation will advance semiconductor and microprocessor technology through several more generations of new products. But the limitations of physics await, and have already begun to haunt the theorists of the semiconductor revolution. To overcome these limitations, university and corporate laboratories are experimenting with alternatives to silicon and semiconductors. There is gallium arsenide, but in all likelihood, it will encounter the same physical limitations as silicon, though perhaps a generation or two later.

Why will this happen? Writing in Scientific American in June 1993, Robert W. Keyes of IBM listed some of the things that may go wrong.

Let's look closely at some of the dangers Keyes describes. At the molecular level, strange things begin to happen to chips, things that can only be explained by quantum mechanics. For example, Keyes predicts that within a decade transistors in MOS circuits will sit so close together (300 angstroms) that electrons tunneling their way through the insulating material will become a problem. Furthermore, as electron pathways shrink, the electric fields required to block their passage (when the transistor is switched to the off/zero mode) must get stronger. This ever-higher voltage will throw off enough heat to melt the chip unless it is cryogenically cooled--an expensive and unpopular solution. And even if you cool it, the electrons may still pick up enough energy from the field to set off a chain reaction that shorts out the chip--a scenario that occurs at 500,000 volts/cm, which is not far from the current field operating level of 400,000 volts/cm.

None of this may matter much, Keyes argues, because it may not be possible to build these devices. Chip companies have had to move further and further out on the light spectrum to find wavelengths short enough to produce the features needed on modern integrated circuits. Each movement outward, from visible light to ultraviolet to deep ultraviolet to X-rays, becomes more difficult and expensive to produce and control. Manufacturing devices with this degree of miniaturization will require laboratories so clean--essentially vacuums--that they will be wildly cost-prohibitive. And, finally, even if it is possible to build such super-integrated microprocessors, they will have so many features--a billion transistors, say--that producing a defect-free chip may be impossible.

When will all this occur? An important speech given in 1989 by James Meindl, provost of Rensselaer Polytechnic Institute, gives a clue. Meindl proposed a measurement called the Chip Performance Index (CPI) that combined key performance features of semiconductor components. For MOS transistors, he predicted the theoretical limit of integration was an increase of 10^19 times over 1960 levels. His estimate of the then-current CPI was 10^13--meaning MOS semiconductors had only a factor of one million (10^6) of improvement left. Expecting the miniaturization process to slow as obstacles increased, Meindl suggested the limit of silicon would be reached in the second or third decade of the 21st century.

In his article "Farewell to Chips?" (Byte, January 1990), Bob Ryan said:

For at least the next ten years, you can expect MOS [transistor] technology to provide ever-improving price/performance ratios. The rate of increase will not be as great as in the past, but it will be enough to handle just about any problem you can devise. By the year 2000, one or more of the newer technologies will emerge as a practical alternative to MOS [microprocessor technology].

Sounds simple. But there is always the possibility that no practical alternative to silicon will be found and the semiconductor revolution will end.

Expressing just that concern, in 1992, Don Lindsay of Carnegie Mellon University took a look at Meindl's findings and asked whether there was any alternative to silicon that offered a way around this technical barrier. Surveying the industry, he found enough promising technical options to make him optimistic. For instance, an electron's ability to move through a material is characterized by that substance's "drift velocity," which in turn depends on the material's "mean free path," the distance an electron can travel before it bumps into an atom ("The Limits of Chip Technology," Microprocessor Report, January 25, 1993).

If you make a circuit smaller than one mean free path, the device can suddenly make a huge performance leap--conceivably into the range of one-picosecond (a trillionth of a second) gate speeds. In Lindsay's words, performance "goes ballistic." The problem with silicon is that, at room temperature, you need surface features smaller than 0.1 micron to achieve these speeds. That will be very hard to do. But gallium arsenide "goes ballistic" with less miniaturization. Other interesting materials, such as silicon carbide and even diamond, do the same thing and have the added advantage of superior heat dissipation. Designers might choose a "heterostructure" that sandwiches different materials in such a way that the electrons can jump up into a higher velocity material for transport, then hop back down into the circuit.

Another way to get a doubling of performance, Lindsay found, is to cryogenically cool the circuits or their interconnections. With the new "high-temperature" superconducting materials, one no longer needs liquid helium to cool the circuit to nearly absolute zero, but only liquid nitrogen, which boils at about -196 degrees C. Though you would never carry such a cryogenic refrigerator around with a pocket calculator, it could still be very practical in a networked computer. With inventions such as the free-piston Stirling engine (for which Intel holds the patent), liquid nitrogen coolers for a chip could be reduced to the size of a two-quart milk carton, says Lindsay, making them eminently practical for many applications.

None of these alternative materials will be as easy to work with as good old silicon, but as processed-wafer values climb into the millions of dollars and silicon grows unworkable, they will no doubt look increasingly appealing.

Another approach to the problem is to replace the electrical signals on the chip with beams of light. This would circumvent many of the power and stray-charge problems associated with electrons. However, going optical poses some real design challenges. For example, the current interconnections for electricity, essentially miniature flat-wire grids on the chip surface, would have to be replaced by hollow-tube optical waveguides. Optical circuitry also requires a number of new architectural features, such as emitters, detectors, and modulators, that would have to migrate onto the processor surface.

One solution has been developed by researchers at Georgia Tech. This approach uses "chiplet" technology to grow gallium arsenide devices onto the surface of a silicon chip to create a hybrid of optical and electronic technologies dubbed an "optoelectronic integrated circuit" (OEIC). The appeal of this OEIC technology is that it offers an intermediate step, one that can still use silicon-wafer fab techniques, to save on cost and keep yield rates high.

Optical chips, for all of their advantages, still face some of the same problems that threaten silicon fabrication, notably the need to create the small feature sizes. What's more, the optical signals in the chip must be converted to electrical signals that in turn create an optical response. Needless to say, during that translation, many of the old electrical problems will reappear.

Yet another possible solution is to go back to first principles and rethink the idea of the gate itself. Are there other ways to perform the same function? And what would these technologies be like?

One possibility that has been around for several decades is the Josephson junction (Jj) described by Janet Barron in "Chips for the Nineties and Beyond," Byte, November 1990. IBM experimented with Josephson junctions all through the 1970s, only to abandon them in 1983 when it determined that the technology's commercialization would take too long and be too expensive. Ten years from now that may no longer be true.

Josephson-junction technology replaces the gate with a switch, formed by placing an insulator between layers of a superconductor that has been cryogenically cooled, usually with liquid helium. Jj switches are very fast (1 nanosecond per instruction) and draw very little power (just a few milliwatts). The only problem is that liquid helium is required for the cooling, which is not only expensive, but impractical. Nevertheless, for specialty computers with exotic and critical applications, Jj-based computation is already appealing. What makes Jj technology suddenly very exciting is the discovery in the last few years of high-temperature superconductors. While none of these materials yet operate at room temperature, they are getting closer, and each improvement means a commensurate improvement in the economic viability of Josephson junctions.

Even more amazing are quantum switches. Here, we enter the world of nanoelectronics, designing electronic devices at the atomic level. In this world, the rules are different, and scientists can take advantage of "quantum effects" in devising new types of circuits.

Very small silicon transistors suffer from electrical-wave interference, but quantum switches use that interference to control the passage of electrons through ring-like structures of gallium arsenide that are only a few atoms thick, and are created by epitaxy. These switches are connected by "wires," very pure optical-fiber-like conduits or waveguides, also laid down upon the chip surface by an epitaxial layer only a couple of atoms thick. To designers, these features, the ring and the wire, are respectively two- and one-dimensional quantum structures. Even more exotic is the research into zero-dimension quantum structures. "Quantum dots," as they are called, can be packed far more densely than transistors, and best of all, might interact with one another directly without interconnections.

Quantum switches offer other advantages besides size. In silicon transistors, trillions of electrons shoot through each time the gate is opened or closed, but quantum switches can choke down to open and close on one electron at a time--meaning billions of times per second. This offers the potential for computers to blast along at millions of times the speed of the most-powerful current machines. Nor would quantum-switch computers be limited to binary: They could allow multiple states, making them even more powerful and adaptive. That's why scientists are excited about the long-term potential of quantum switches.

Here's why they aren't optimistic about the short-term potential: Quantum switches only work well at about -450 degrees F; that is, like the Josephson junction, near absolute zero. At room temperature, the quantum switch is always on. But at a certain level of miniaturization, still a decade or so away, quantum switches will work in normal conditions. Building such miniaturized switches in volume, however, will be a different story.

Finally, and most remarkable of all, is the possibility of replacing silicon semiconductors with organic neural circuits. Bionic nerve chips, made of silicon, have already been used to connect the severed ends of animal nerves, the existing nerve fibers growing up through tiny holes in the chip to touch its electrical interconnects. These bionic-nerve chips restore some nerve activity, and hold the potential one day to reconnect severed or damaged nerve bundles such as the spinal cord, or to help control artificial limbs as if they were real.

That's just the beginning. Early in 1994, researchers at the University of Maryland announced they had successfully grown rat nerve cells atop a silicon substrate. Bizarre as that may sound, the advantages of organic neurons are considerable. Though they are comparatively large, their multiple interconnects make up for that disadvantage through the extraordinary performance that results from their parallel operation. They also regenerate and thus can overcome flaws and damage from use. And they are identical to animal nerves--meaning they will "think" the same way we do.

Then, in late 1994, the scientific world was amazed to read of an experiment by Dr. Leonard Adleman of the University of Southern California in which he used biological reactions involving strands of DNA as a sort of molecular computer. (See "Molecular Computation of Solutions to Combinatorial Problems," by Leonard M. Adleman, Science, November 11, 1994, and "Biochemical Techniques Take on Combinatorial Problems," by Peter Pearson, DDJ, August 1995.) It took Adleman six months to come up with the technique, which involved translating data into sequences of letters represented by the chemical units of the DNA. The problem to be solved--finding a route that visits each of a set of linked cities exactly once, the directed Hamiltonian-path problem--is difficult even for computers because the number of potential routes increases exponentially until, at 100 cities, the problem becomes too great for any modern computer.

Adleman chose seven cities (a challenging enough problem), assigned the cities their DNA "flight numbers," and mixed together all the different strands in a solution no bigger than one-fiftieth of a teaspoon in the bottom of a test tube. The DNA did the rest, recombining almost instantaneously into every possible path, including the one encoding the right answer. The entire calculation occurred in trillionths of a second, a thousand times faster than the fastest supercomputer, and was stored in a space a trillionth the size of a silicon transistor.

The bioprocessor. A growing, thinking, learning chip, made from the raw stuff of life. It is almost too much to imagine. And what you can imagine is both thrilling and disquieting at the same time. It will certainly be the greatest triumph and the ultimate irony of the microprocessor. Devised to mimic and replace the brain, the microprocessor, the greatest invention of the twentieth century, may in the end outstrip its inventors' greatest fantasies and become a living brain itself.

DDJ