Attacks and Accidents

Dr. Dobb's Journal January 2003

By Michael Swaine

Michael is editor-at-large for DDJ. He can be contacted at mike@swaine.com.

There were incidents and accidents

There were hints and allegations

—Paul Simon

At 5:00 pm Eastern Time on Monday October 21, 2002, seven of the 13 root servers of the Internet's domain name system hierarchy were taken out of action. Two more were suffering severe performance degradation at the same time. The servers were hit simultaneously from many directions by a coordinated, deliberate attack—probably by means of an Internet Control Message Protocol ping flood using any of the widely available tools designed for such system sabotage. It was judged to be the largest and most complex distributed denial of service attack in the Internet's history—so far.

As we have to remind ourselves when we hear about the latest airline disaster, not every crash is an attack: Some crashes are just accidents. Sift through the security-related stories in the queue at your favorite technology news service, and you'll find reports of: 1. Ordinary bugs in programs that can lead to system crashes; 2. Deliberate attacks on systems; and 3. A hybrid category—flaws in programs that expose them to such attacks. Items in that third category count as bugs only in a climate in which every vulnerability is likely to be exploited by the bad guys; in a nicer world they would just be benign sloppiness. But the October incident was neither sloppiness nor an accident, but a deliberate attack on the Internet, something with which we have become far too familiar.

As it turned out, the attack went unnoticed by end users of the Internet. Taking out the root DNS servers is only disruptive if it can be sustained over a stretch of hours; most DNS lookups are run against data cached on ISP servers. A similar attack directed at other targets—I'll refrain from suggesting any—could do a lot more damage. And there is little doubt that another, and this time better planned, attempt to take down the entire Internet will occur; it's only a matter of time. Cheerful thought, eh?

That's why I think that Albert-László Barabasi's book Linked: The New Science of Networks (Perseus Publishing, 2002; ISBN 0-7382-0667-9), comes at a particularly opportune time. Linked deals in some detail with the effect of network topology on a network's response to attacks and accidents.

Before I opened the book, I thought I knew something about the effect of the Internet's topology on its resistance to attack. I had bought into the myth that the Internet was specifically built to survive a nuclear attack. It turns out that this familiar claim is wrong.

How the Net Was Woven

I'd rather be a forest than a street

—Paul Simon

A particularly unhelpful metaphor for the Internet is "information superhighway." The Internet is certainly not like a single highway with on-ramps and off-ramps; that's the wrong topology. Nor is it structured like the entire American highway system, although arguably it's unfortunate that it isn't. But what was the intended topology of the Internet, and how close is what we ended up with to what was intended? Here's Barabasi's brief history, in my even briefer paraphrase:

In 1959, the Department of Defense gave the RAND Corporation the job of designing a communications system capable of surviving the effects of a Soviet nuclear strike on America. (The nuclear attack story does have some basis.) The RAND Corporation handed the job to 30-year-old Paul Baran, and Baran produced a 12-volume report with his analysis of the existing communications infrastructure and its vulnerabilities, and his recommendations for a network that would have a better chance of surviving that feared nuclear strike.

The network topology that Baran recommended was a distributed network. Centralized networks with one hub, or even decentralized networks with a lot of hubs, were critically vulnerable to attacks aimed at those hubs. Baran recommended a democratic, distributed network, more or less with no node more important than any other. Like the American highway system.

But that's not what the Internet looks like today, and it's not the way the Internet was first designed. The DoD largely ignored Baran's advice and what got built was a decentralized network with multiple hubs. Like the American air-traffic network.

According to Barabasi, it wasn't the distributed topology itself that killed Baran's plan. What killed it was AT&T. The phone company saw that Baran's distributed plan would have to be implemented as a digital network, and a digital network would be a new network, competitive with AT&T's analog network. So the nuke-safe Internet would have threatened Ma Bell's monopoly, and the DoD scuttled it. This was in 1959, two years before a departing President Eisenhower would famously warn Americans of the danger of the military-industrial complex and of public policy becoming the captive of defense contractors. To an alarming extent, it seems, it already was.

In 1965, ARPA revisited Baran's preferred network topology in the creation of ARPAnet, a plan for connecting computers at universities and defense-related sites. But the key decisions regarding the underlying communications network had already been made, and the Internet ended up being a decentralized—not a distributed—network.

Since then, the Internet has grown beyond all expectations, becoming a network of networks with other networks layered on top of them, practically impossible to catalog or map.

And its nondistributed topology is directly responsible for its vulnerability to attacks.

A Choice of Catastrophes

It's just apartment house rules

One man's ceiling is another man's floor

—Paul Simon

Linked sharply distinguishes between attack survivability and fault tolerance. The two issues are not only distinct; in terms of network topology, they are apparently conflicting goals: A network topology that is resistant to accidents will be vulnerable to attacks, and a network topology that is resistant to attacks will be vulnerable to accidents.

Barabasi cites an experiment in which he and his colleagues simulated random router failures. They wanted to know how much of the Internet they would have to damage before the remaining nodes no longer held together and the simulated Internet failed. To their amazement, they found that they had to remove 80 percent of the Net before the remaining nodes stopped functioning as a cohesive network. The Internet is remarkably robust against random failures: A random four-fifths of the Internet can disappear without affecting the smooth functioning of the remaining fifth.

Take out nodes in a nonrandom fashion, however, and it's a different story. Removing a tiny fraction of a percent of nodes, carefully chosen, can bring about a spectacular, massive failure of the Internet.
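Barabasi's experiment is easy to reproduce in miniature. The sketch below is my own rough approximation, not his actual simulation; the graph size, the simplified preferential-attachment rule, and the 10 percent removal fraction are arbitrary choices. It grows a small scale-free network, then compares the size of the largest surviving component after random node failures versus a targeted attack on the best-connected hubs.

```python
import random
from collections import defaultdict

def barabasi_albert(n, m, seed=0):
    """Grow a rough scale-free graph: each new node links to m existing
    nodes chosen with probability roughly proportional to their degree."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    targets, weighted = list(range(m)), []
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
        weighted.extend(targets)
        weighted.extend([new] * m)
        targets = [rng.choice(weighted) for _ in range(m)]
    return dict(adj)

def giant_component(adj, removed):
    """Size of the largest connected component once `removed` is gone."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best

n = 2000
adj = barabasi_albert(n, m=2)
nodes = list(adj)
k = n // 10                                       # remove 10% of nodes either way

accident = set(random.Random(1).sample(nodes, k))              # random failures
hubs = sorted(nodes, key=lambda v: len(adj[v]), reverse=True)
attack = set(hubs[:k])                                         # hit the hubs

print("after random failures, giant component:", giant_component(adj, accident))
print("after targeted attack, giant component:", giant_component(adj, attack))
```

With equal removal fractions, the targeted attack leaves a markedly smaller giant component than random failure does, and pushing the targeted fraction higher fragments the network long before random removal would.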

It's a trade-off, Barabasi says: "The price of topological robustness [against random catastrophes] is extreme exposure to attacks."

The trade-off involves networks with essentially random links versus networks with hubs. With random links, you get the American highway system, node equality, a normal bell curve distribution of the number of links per node, resistance to attacks, and vulnerability to accidents. With hubs, you get the American air transport system, node inequality, a power curve distribution of the number of links per node, resistance to accidents, and vulnerability to attacks.

The distributed network that Paul Baran wanted to see implemented would have been much more resistant to deliberate attacks than the current Internet is, but it would have been much less resistant to the effects of accidents like software glitches, backhoe faux pas, and inclement weather.

A Scale-Free World

A loose affiliation of millionaires

And billionaires

—Paul Simon

The kind of network topology that we have in the Internet is typical of networks that just grow—or, for that matter, that result from forces that you could call the opposite of growth, such as shattering. These networks are called "scale-free" because they exhibit self-similarity on different scales. Scale-free networks, and their associated power-law distributions, are everywhere in nature. Barabasi's discussion ranges through such topics as Kevin Bacon, six degrees of separation, and the Hollywood network; economics, monopolies, and Microsoft's market dominance; viruses, fads, and AIDS; biological networks, the resistance of living systems to accidents; and where the Internet is headed.

All the discussion of power laws in Linked sent me to my bookshelves, and Manfred Schroeder's Fractals, Chaos, Power Laws (W.H. Freeman, 1991; ISBN 0-7167-2136-8). Schroeder's list of topics covered is more impressive than Barabasi's: fractals, power laws, pink noise, scale-free systems, Brownian motion, the Mandelbrot set, the digits of pi, forbidden symmetries, percolation and forest fires, phase transitions and renormalization, and cellular automata. Schroeder is sometimes so elliptical that it can be hard to follow his points, but he threads together a fascinating network of ideas in a way that reminds one of James Burke's "Connections" television series.

Our Present Dilemma

Everything put together

Sooner or later falls apart

—Paul Simon

But I tend to get tangled up in these nets of ideas and lose track of the practical issues. Here's a practical message from the Barabasi book: "A few well-trained crackers," says Barabasi, "could destroy the Net in thirty minutes from anywhere in the world." The technique is widely understood: Seed a virus in hundreds of thousands of computers, have them launch a coordinated attack to bring down major Internet nodes with a denial-of-service attack, and wait for the failure to cascade through other nodes. The Code Red worm, which infected hundreds of thousands of computers in 2001, demonstrated this capability, as did the October 2002 incident. The Internet, in its current topology, cannot survive.

The Quantum Thread

The information's unavailable

to the mortal man

—Paul Simon

The trick to quantum cryptography is the fragility of quantum information. If you look at it, it disappears. Now a startup company is planning to bring a product to market "in the first quarter of 2003" that actually uses the principles of quantum information processing to encrypt data for transmission. You can't tap the transmission and read the message without destroying the message and announcing your presence. This is a genuinely new kind of cryptography. The company, MagiQ Technologies (http://www.magiqtech.com/), says that the technique won't work over the Internet, but only over tightly controlled dedicated optical links. That's going to give them a somewhat smaller market than I imagined on first hearing of the product, but not a negligible market. What's interesting about the product is that if it succeeds, it will be the first commercial application of true quantum information science.
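The principle is easy to see in a toy simulation. The sketch below is my own classical model of the BB84 quantum key distribution protocol, not anything from MagiQ's actual product; the bit count and random seed are arbitrary. Each photon is modeled as a bit prepared in one of two bases; measuring in the wrong basis yields a random bit, so an eavesdropper who must measure and re-send disturbs roughly a quarter of the bits that the sender and receiver later compare.

```python
import random

def measure(bit, basis, meas_basis, rng):
    """Measuring in the preparation basis returns the bit exactly;
    measuring in the other basis returns a uniformly random bit."""
    return bit if basis == meas_basis else rng.randint(0, 1)

def bb84(n, eavesdrop, seed=0):
    """Return the error rate Alice and Bob observe on their sifted key."""
    rng = random.Random(seed)
    a_bits  = [rng.randint(0, 1) for _ in range(n)]
    a_bases = [rng.randint(0, 1) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal

    # The photons in flight: (bit, preparation basis) pairs.
    qubits = list(zip(a_bits, a_bases))
    if eavesdrop:
        tapped = []
        for bit, basis in qubits:
            e_basis = rng.randint(0, 1)
            e_bit = measure(bit, basis, e_basis, rng)
            tapped.append((e_bit, e_basis))           # state collapses into Eve's basis
        qubits = tapped

    b_bases = [rng.randint(0, 1) for _ in range(n)]
    b_bits = [measure(bit, basis, mb, rng)
              for (bit, basis), mb in zip(qubits, b_bases)]

    # Sift: keep only positions where Alice's and Bob's bases agree,
    # then compare to estimate the error rate.
    sifted = [(a, b) for a, b, ab, bb in zip(a_bits, b_bits, a_bases, b_bases)
              if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return errors / len(sifted)

print("error rate, no tap :", bb84(4000, eavesdrop=False))
print("error rate, tapped :", bb84(4000, eavesdrop=True))
```

With no tap, the sifted bits agree perfectly; with a tap, the error rate jumps to about 25 percent. That jump is the alarm bell the whole scheme relies on.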

Of course, all modern computing depends on quantum properties of semiconductors. But quantum information science means something more than that. I've written about this emerging field before, and in November 2002, Scientific American devoted a feature article to QIS. The article was written by Michael A. Nielsen, an Australian physicist who has also written, with quantum legend Isaac L. Chuang, the first comprehensive graduate-level textbook on QIS, Quantum Computation and Quantum Information.

QIS is based on qubits rather than bits, and the distinction is deep: Qubits operate in fundamentally different ways from bits. QIS also deals with quantum entanglement, a phenomenon of quantum physics that I continually grapple with trying to understand. QIS looks at entanglement not just as a phenomenon, but as a resource, like energy, to be used to do work: specifically, quantum information processing.

QIS started to look a lot more practical for real-world information processing in 1995, when Andrew M. Steane and Peter W. Shor independently discovered how to do quantum error correction. This discovery in principle solved the problem of the inherent uncertainty of quantum systems, their essentially probabilistic nature. QIS is looking very real, but has lost none of its weirdness. If I were entering graduate school right now, I'd get into a QIS program.
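The simplest quantum error-correcting code, the textbook three-qubit bit-flip code, shows the basic idea. The sketch below is a classical simulation of that toy code, not of Steane's or Shor's full constructions (which also handle phase errors): encode redundantly, then measure parities that locate an error without ever reading out the protected amplitudes.

```python
# Basis states of 3 qubits are indexed 0..7; bit k of the index is the
# value of qubit k. A logical qubit a|0> + b|1> is encoded as
# a|000> + b|111>, i.e. amplitudes at indices 0 and 7.

def encode(a, b):
    state = [0.0] * 8
    state[0b000], state[0b111] = a, b
    return state

def bit_flip(state, k):
    """Apply an X (bit-flip) error to qubit k."""
    out = [0.0] * 8
    for i, amp in enumerate(state):
        out[i ^ (1 << k)] = amp
    return out

def syndrome(state):
    """Measure the parities q0+q1 and q1+q2 (Z0Z1 and Z1Z2). For a code
    state hit by at most one X error, these parities are identical for
    every basis state with nonzero amplitude, so the measurement is
    deterministic and reveals nothing about a and b."""
    i = next(i for i, amp in enumerate(state) if amp)
    b0, b1, b2 = i & 1, (i >> 1) & 1, (i >> 2) & 1
    return (b0 ^ b1, b1 ^ b2)

def correct(state):
    """Use the syndrome to identify which qubit (if any) to flip back."""
    which = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(state)]
    return state if which is None else bit_flip(state, which)

logical = encode(0.6, 0.8)
for k in (0, 1, 2):
    damaged = bit_flip(logical, k)
    assert correct(damaged) == logical     # the logical qubit survives
print("a single bit flip on any qubit is detected and corrected")
```

The parity checks never touch the amplitudes 0.6 and 0.8 themselves, which is the trick that sidesteps the measurement problem; Shor's and Steane's codes extend the same idea to phase-flip errors as well.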

Just how far QIS could take us is suggested by an article in the October 12, 2002 issue of New Scientist magazine. The cover story of that issue dealt with the work of Grigori Volovik of Helsinki University of Technology and the Landau Institute for Theoretical Physics in Moscow. Volovik maintains that a blob of supercooled helium mirrors states of matter and eras in the universe's history that are nearly impossible to study. Due to its quantum properties, a blob of helium can be used to answer many of the most troubling problems of modern physics, he claims. Oxford University Press will publish Volovik's book, The Universe in a Helium Droplet, some time this year.

A Form of Flattery

Nobody knew from time to time

If the plans were changed

—Paul Simon

It is an aphorism often repeated that imitation is the highest form of flattery. The aphorism needs updating in the age of cloning: The highest form of flattery may be preserving someone's DNA to use in creating a clone of the individual. Software cloning is a less emotionally charged area, but it still raises major legal issues. So it is interesting that, in the case of one piece of software that has been repeatedly and blatantly copied, the otherwise proprietary intellectual-property rights holder has never raised any objections to the copying. I refer—yes, again—to Apple's HyperCard.

HyperCard, arguably the best nonprofessional software development tool ever created, was released in 1987. Imitators followed quickly: SuperCard, Plus, and ToolBook were the best-known, but not the only, products that closely copied the user environment and the HyperTalk language. Plus was cross-platform, allowing users to create applications for both the Mac and Windows platforms. ToolBook was Windows only. SuperCard, though, was a pure Mac play, copying HyperCard and competing with it. SuperCard was also distinct in that its development environment was different from HyperCard's, and it offered things that HyperCard didn't have, like color. It was clearly intended as a more advanced product than HyperCard.

Over the years, I repeatedly lost track of SuperCard development. So did the rest of the world, as SuperCard was passed from one owner to another. But Solutions Etcetera has released SuperCard Version 4. I've been playing with it, and I find little change in the overall feel of the product since the late '80s.

But it is also a fully native Mac OS X application and development environment.

SuperCard was always close enough to HyperCard that any HyperCard user could make the transition. It was also just different enough that few HyperCard users did make that transition. Part of the genius of HyperCard was the seamless connection of the user environment and the development environment. There was, in fact, only one environment. SuperCard enforces a more traditional separation of development and run-time environments, which certainly has advantages. It also put off HyperCard users, who saw no need for the extra layer of complication.

SuperCard has been around almost as long as HyperCard, HyperCard stacks (as HyperCard-created applications are called) can be converted easily to SuperCard format, and SuperCard is closer to HyperCard than anything else out there. It has a claim to being the rightful successor to all that was HyperCard.

Assuming, that is—and here's the rub—that the company behind it sticks around for the long haul. This is not SuperCard's history. Somehow, though, the product has survived, and I think that, somehow, it will continue to survive. Much to Steve Jobs's chagrin, this HyperCard/SuperCard/whateverCard albatross is not going to go away.

DDJ