In just a few short months we'll finally know how seriously to take the Y2K problem. I anticipate major headaches for most of us, but few outright disasters. On the comp.software.year-2000 newsgroup, this view would get me labeled as a "Pollie," short for Pollyanna. For reasons which aren't clear to me, the Pollyannas on this newsgroup seem to receive a lot of flaming and abuse from the doomsayers. Maybe it ruins some folks' sense of fun to suggest that nuclear plants will not explode, the power grid will not go down, and we will not be reverting to a primitive civilization, à la Road Warrior, where people with crossbow skills are suddenly in high demand. Be that as it may, I doubt that many embedded devices will fail due to bad date calculations; such calculations just aren't essential to their operation.
That doesn't get us totally out of the woods, however. Some devices may include date calculations anyway, for human interface and logging purposes. Depending on the architecture and the carelessness of the programmer, an error in a date computation can still bring the entire system to a halt. This is indeed a different class of bug. It's the kind that says, "if I'm going down, I'm taking you with me." We have already seen this kind of bug in non-Y2K contexts, the most notable being the Ariane 5 rocket disaster of a few years back (see http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html). And we're seeing Y2K versions of this bug as well. See http://www.iee.org.uk/2000risk/Casebook/eg-07.htm for a particularly disturbing example involving a petrochemical plant. Yes, a Y2K bug can create havoc in an embedded system, even if it occurs in a non-critical computation.
I do think the likelihood of such havoc is pretty small. The example above comes from a set of case studies maintained by the British IEE (the Institution of Electrical Engineers, not to be confused with the IEEE). Most, but not all, of the case studies I've seen list the consequences as "cosmetic." Still, we'd be foolish to ignore the lessons Y2K is trying to teach us. The first lesson: be proactive about handling errors; don't just sit by and wait for them to handle you. The second lesson: either partition essential computations from non-essential computations, or don't allow non-essential computations to run at all.
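To make the second lesson concrete, here is a minimal C sketch of that partitioning. The names (`format_date`, `control_step`) and the failure mode (a two-digit-year rollover producing a nonsense year) are hypothetical, invented for illustration; the point is only that the non-essential date computation reports failure instead of halting, so the essential control path always runs.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical non-essential date formatter. A two-digit-year
 * rollover might hand it a nonsense value; it signals failure
 * rather than trapping or halting the system. */
static int format_date(int year, char *buf, size_t len)
{
    if (year < 0 || year > 99) {
        return -1;              /* report the error; do NOT halt */
    }
    snprintf(buf, len, "19%02d", year);
    return 0;
}

/* Essential control step: the log timestamp is best-effort,
 * but the control action itself always completes. */
static int control_step(int year)
{
    char stamp[16];
    if (format_date(year, stamp, sizeof stamp) != 0) {
        strcpy(stamp, "????");  /* degrade the log, not the device */
    }
    printf("log %s: valve OK\n", stamp);
    return 0;                   /* control succeeded regardless */
}
```

A bad date here costs us a readable timestamp and nothing else; the "if I'm going down, I'm taking you with me" failure is designed out because the non-essential code has no way to abort the essential code.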
These lessons are especially important considering the latest push toward development of home networks. As some would tell it, we're going to put CPUs in every conceivable consumer appliance, from can openers to washing machines, and hook them together in a network. Then, of course, we'll hook that network to the Internet, so we can check up on what our toaster is doing while we're at work. The benefits of all this connectedness remain to be seen, but some of the risks are obvious. More connections give an ordinary device more excuses to fail. I guess I don't mind if my coffeemaker wants to talk to my refrigerator, but if it refuses to brew with a '404 Not Found' message, it will quickly become 'not found' around my kitchen. I am being a little facetious, but I am also a little bit concerned. Networked devices must be capable of failing by degrees, depending on which nodes can be reached; but failure by degrees is largely unknown in consumer-oriented products.
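What "failing by degrees" might look like in that coffeemaker can be sketched in a few lines of C. Everything here is hypothetical — `fridge_reachable` stands in for whatever timeout-guarded network query a real device would make — but it shows the shape of the idea: the network feature is an optional extra, and its absence only degrades the device, never disables it.

```c
#include <stdio.h>
#include <stdbool.h>

/* Stand-in for a timeout-guarded network query; stubbed out
 * here to simulate an unreachable node. */
static bool fridge_reachable(void)
{
    return false;
}

/* Brew regardless of network state; connectivity only adds value. */
static int brew(void)
{
    int strength = 3;           /* sensible built-in default */
    if (fridge_reachable()) {
        strength = 5;           /* e.g. a preference fetched over the net */
    } else {
        puts("fridge unreachable; brewing with defaults");
    }
    printf("brewing at strength %d\n", strength);
    return strength;
}
```

The design choice is simply that the local fallback path is the primary path, and the networked path is layered on top; a '404 Not Found' then means weaker coffee, not no coffee.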
When the day comes that we've finally succeeded in hooking everything to everything else, I sure hope we'll have learned the lessons of Y2K. If not, we may have trouble on our hands that makes January 1, 2000 look like a great big party.
Marc Briand
Editor-in-Chief