Ray Valdes
Ever since I moved to California, I've had to accustom myself to sudden changes in the landscape. This is, after all, the land of earthquakes and massive forest fires -- natural catastrophes that punctuate what are otherwise glacially slow natural processes. These sometimes tragic catastrophes are not without benefit: earthquakes make new mountains, and forest fires can renew the wilderness.
The computer industry is not without its share of catastrophic discontinuities, those events that clear out the old growth and make space for vibrant new weeds and saplings. It's been 15 years since the "fire in the valley" cleared a space in the old growth of the mainframe and minicomputer industries and made room for such weeds as Apple Computer, Adobe, Autodesk, and Atari (and these are merely those new companies whose names start with the letter "A").
Looking toward the next few years of this industry, certain general predictions are easy to make: more MIPS, more memory, more mass storage, more multimedia, and so on. The precise details are harder to foresee, but (in some sense) who really cares? Except for those people directly connected, it makes little difference to desktop PC users that Northgate, Everex, and Dell are major clone-makers, instead of Cromemco, NorthStar, or Osborne.
And if you stand back far enough, the hot points of distinction between Windows 3.0, Presentation Manager, and OSF/Motif fade into a generic gray GUI image. We may as well be using VisiOn. Likewise, from a distance, the noisy war between OS/2 and Unix subsides to a steady background noise, like the sound of waves on a distant beach. It may as well be Mach, or DOS 7.0.
The point I'm trying to make is that, as far as desktop PCs are concerned, technology is making unexciting, steady progress down a wide evolutionary road that will not have sudden turns and unexpected detours. The path of this mainstream highway is predictable, and its general contour is constrained by the underlying technology and guided by the needs of the market. The exact details are left to the vagaries of historical accident, such as the particular lawyer Gary Kildall had on hand when IBM came calling about an operating system for its PC -- which is why most of our desktop machines say MS-DOS rather than DR-DOS when we boot them up.
In five years, all our desktop machines will have an operating system that multitasks preemptively, exploits 32-bit addressing, and has lightweight threads, virtual memory, and support for networked interoperability. And, from a technologist's point of view, it doesn't much matter what its brand name will be. As with most mature industries, there will be a few major brands -- ABC and NBC, Time and Newsweek, GM and Ford, Republicans and Democrats, OS/2 and Unix -- to give consumers the illusion that they have a choice.
Likewise, users of desktop computers will be served by a direct manipulation interface that has overlapping windows, graphical icons, multibit pixel displays, aural feedback, and cluttered dialog boxes dressed up in a pseudo-3-D look-and-feel. Will users really care if the name on the shrink-wrap box says Presentation Manager, Wheaties, or Cheerios?
In short, the average desktop machine of 1995 will look a lot like Steve Jobs's Next machine, and then some: twin RISC/DSP computing engines, heavyweight pixels, a multimaster data bus, a modern networked operating system, and a post-modern user interface. No one can say for sure, but I doubt the majority of such machines will have the Next brand name on them.
What's more interesting to speculate about are the sudden catastrophic radical discontinuities -- also known as revolutions -- that are as unpredictable as they are inevitable. These new fires in the valley will effect the demise of stagnant, hollowed-out giants like Ashton-Tate and Lotus and enable the growth of new corporate forms heretofore unseen. These radical discontinuities will have a tragic aspect, in that numbers of workers will find themselves looking for new jobs, much like the laid-off employees of Wang who went knocking on the doors of Lotus in the early 1980s. The purpose of this prediction is not to pass judgment or place a stamp of approval on these events, as much as it is to foresee them so that we can be better prepared.
Looking back, the two major revolutions in the computer industry were each the result of years of steady, evolutionary growth, punctuated by an abrupt jump to a hardware platform based on a fundamentally smaller level of user scale. DEC's minicomputer was the first machine affordable by the small engineering or research group, and marked the first time scientists could work interactively in the same room with their machines. A whole industry grew up around this new platform, which displaced the mainstream mainframe industry (to some extent) and then continued to evolve alongside it.
Likewise, the PC revolution gave us the first machines that we could place on our desktops, or put in the back seat of the old Chevy and take to the Computer Faire to exchange small-scale, garage-grown technology. The nascent PC industry destroyed the manufacturers of dedicated word-processing machines and displaced (to a certain degree) both mini and mainframe systems, giving us the three strains of mainstream computer technology now marching alongside one another.
So when and where will the fourth strain arrive? And when it comes, what will it look like?
Predicting the next revolution is a little like predicting the next earthquake, a somewhat dubious endeavor. Nevertheless, certain aspects are inevitable. Like the two previous revolutions, it will involve a hardware platform on a fundamentally smaller level of user scale. Like those revolutions, it will also involve a convergence of enabling software technologies (new operating systems, new tools, new languages, new application methodologies) fulfilling the previously unmet requirements of new groups of users.
You may ask: What about laptop and notebook computers; do they constitute a revolution? No, they are just old wine in smaller bottles. The DEC LSI-11/03 was almost the same size as an Altair or IBM PC, yet in all other respects it belonged to the same strain as the room-size 11/70. Likewise, Compaq's new notebook machine is of the same family tree as its floor-standing SystemPro.
The platform for the next revolution may be the same size as today's notebook computer, but it will be in most other ways a new and different species. It will be notebook size or smaller (that is, armtop or palmtop). It will be controlled by a direct manipulation interface. And it will be what I call "analog accessible." Analog accessible is a fancy term for a closer way of being user-friendly.
DEC's minicomputers were the first machines that the average person could stand beside, type in a request, and get an interactive response. (Prior to this, of course, you had to submit decks to the card reader and wait overnight for a response.) The PC, with its standard memory-mapped display, vastly increased the bandwidth of digital output to the user. But the method of input remained the same: ASCII characters typed at a keyboard.
The new breed of machines will allow for input that more closely resembles the analog world in which we live. At a minimum, they will replace the digital keyboard with a stylus or pen. This pen will enable more direct manipulation of objects on the display and eliminate the dichotomy between mouse-on-desk and object-on-screen. No cursor will be needed, because what-you-see-is-where-you-are. Merely place the stylus on the desired object and it will respond.
This is nice, but it's not really "analog." What is analog are other methods of input to the machine, namely, handwriting and voice. Transforming continuous pen strokes and analog speech into digital data that can be processed by the machine is a very difficult task. It is likely that early machines will have limited success in handling these new input modes. In fact, we can see these limitations in predecessor machines that are already on the market, like the Sony PalmTop or the one by Grid. But over time, steady progress will result in qualitative change. Remember that the first CP/M machines used standard dumb terminals instead of higher bandwidth interfaces.
Who will produce these new machines? All the usual suspects: IBM, Apple, Compaq, Sony, Toshiba, and so on. Plus a host of smaller companies, whose names have now started appearing in the press: Go Corporation, Active Book Company, Scribe, Momenta, Data Entry, Touchstone, and CIC. Some of these smaller companies have already come and gone, like Linus Technology, which went out of business earlier this year. After a forest fire, not all of the initial weeds find a secure home on the burnt-out soil.
If all the existing players are working on this new generation of machines, won't the new machines be just another milestone along the mainstream road traveled by the major players? No, because the new platform implies radical discontinuities in the multiple areas of software technology, hardware products, and the user population. These kinds of abrupt changes are hard for an established, large company to handle. Two years before the Apple I, IBM introduced a desktop personal computer called the 1501. It took several more years before IBM realized a more radical approach was needed.
Direct manipulation interfaces will require operating environments that are thoroughly object oriented, as opposed to yet another layer added on DOS. This will work against the skills of the major players and provide a blank slate for application developers.
Applications will be addressed to an entirely different set of users. The first mainframes were for the rocket scientists of the 1940s and 1950s. The minicomputer met the needs of Joe Engineer, while today's PCs are being used by Josephine Engineer, Accountant, and Office Worker. In addition to Joe and Josephine, users of the new machines will have names like Yamashita and Gonzalez. That is to say, because of the increased globalization of the economy and the diminished role of the U.S., it is likely that some of the major players will be based outside the U.S. And even inside the U.S., there will be a whole new population of users in industries previously untouched by desktop technology: truck drivers, service workers, auto mechanics, field salespeople. This will imply a radical shift in the established channels (Businessland, Computerland, mail order), not to mention entirely new crops of application authors.
The design of applications will have to change -- moving from a focus on the keyboard/mouse to what some now call "pen-centric" design. For example, in today's desktop-oriented graphics programs, to draw a circle you have to first choose the circle tool from the palette window, then move over to the document window, and finally click-and-drag with the mouse. In a pen-centric application, you merely use the stylus to draw a circle (or a box or a line or some text) on the document and the system responds accordingly. It is an interesting exercise to rethink some of our favorite applications in light of this new user interface paradigm.
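To make the pen-centric idea concrete, here is a minimal sketch (in Python, with the function name, tolerance, and classification rule all invented for illustration) of how a system might recognize that a raw stylus stroke is meant to be a circle: treat the sampled points as roughly equidistant from their centroid.

```python
import math

def looks_like_circle(points, tolerance=0.15):
    """Classify a pen stroke (a list of (x, y) samples) as a circle
    when every sample lies roughly the same distance from the centroid."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0:
        return False
    # Relative spread of the radii: near zero for circles, large otherwise.
    spread = max(radii) - min(radii)
    return spread / mean_r < tolerance
```

A real recognizer would be far more forgiving of wobble and would distinguish many shapes, but even this toy version captures the shift: the user draws, and the system decides what was drawn.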
If forced to predict the ABC, CBS, and NBC of this nascent industry, I would say that there will likely be one or two U.S.-based companies, one from Japan, and perhaps another based in Europe. Vendors of tools might also be internationally distributed. Only the application vendors will be locally based. But with entire industries to automate -- from real estate sales to trucking to restaurants -- these won't be small potatoes. From this description, one may wonder if there is any place at all for the shoestring garage start-up.
The answer is that there will be many opportunities for small technology-intensive operations, if they form alliances with larger companies, in the areas of both manufacturing and applications. If IBM is to repeat its success with the PC, it will likely do so by licensing technology from a smaller vendor. Small enterprises like Go Corporation and Metaphor Computer have announced agreements with IBM. It's likely there will be others before this revolution plays itself out. This is not quite like the days of the Home Brew Computer Club, but it's as close as one can get in the fin-de-siècle.
And where will the next revolution lead us? Eventually to a place like where we now stand: A mature, stagnant mainstream, ready to be overturned by a new radical discontinuity. That subsequent discontinuity will involve a shift toward virtual reality interfaces (what-you-sense-is-what-you-get) and biocomputing technologies, but that's a subject for another time.
Michael Floyd
When I was recently asked whether object-oriented programming is just a passing fad, my response was a resounding "no!" Object-oriented programming is an evolutionary step in software engineering and, as such, the object-oriented approach is perhaps a key link connecting the preceding paradigms with those yet to come. Consider that as programming languages have evolved from assembly to modern languages such as C and Pascal, so has the notion of modularity. Modularity favors a "divide and conquer" approach that helps the programmer manage complexity by grouping a process or set of actions, usually into a subroutine or separate, relocatable module.
Out of this comes the idea of building reusable software components. With the help of abstraction, software elements within a program or project can be combined to create new elements. Object-oriented programming refines the software component concept by combining process with data. In fact, encapsulating data with the processes that act on it completes the software component idea, and the benefits of object-oriented programming (reusability and extensibility) are really benefits of component-based programming.
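The idea of encapsulating data with the processes that act on it can be sketched in a few lines (Python here, purely for illustration; the class and method names are invented for this example):

```python
# A minimal software component: the data (a balance) is bundled with
# the only processes allowed to act on it (deposit and withdraw).
class Account:
    def __init__(self, balance=0):
        self._balance = balance  # encapsulated state

    def deposit(self, amount):
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self):
        return self._balance

# Extensibility through reuse: a new component builds on the old one
# without modifying it.
class SavingsAccount(Account):
    def add_interest(self, rate):
        self.deposit(self._balance * rate)
```

The point is not the banking trivia but the packaging: outside code never touches the balance directly, so the component can be reused or extended (as `SavingsAccount` does) without breaking its clients.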
If you accept for the moment that object-oriented programming is more than just hype, the next question to consider is "where do we go from here?"
The next step in the evolutionary process may well be something called megaprogramming, a concept that views programming in terms of designing and composing software components, but on the grandest of scales. The term itself was introduced by Barry Boehm and William Scherlis at the June 1990 DARPA Workshop. Megaprogramming uses software components to manage the life cycle of systems, and promises to provide huge increases in programmer productivity.
In megaprogramming, megamodules take the notion of an object as an encapsulation of data and actions (in the form of functions and procedures) a step further. Megamodules encapsulate at a higher level the behavior, knowledge, and know-how within a community of software components. According to Peter Wegner of Brown University, "Megamodules are like nation-states. They have their own languages, traditions, cultures, and nationalistic loyalties."1
Megaprograms, then, are the programs that manage megamodules and model the interaction between systems. Imagine, for the moment, megamodules that simulate the interaction between organisms and the human immune system, or a megaprogram that models the world economy, with each country's micro economy representing a separate megamodule.
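The world-economy example can be sketched as follows -- a heavily simplified illustration, not a real megaprogramming system, with every name and the toy growth rule invented here. Each megamodule hides a whole subsystem behind one narrow interface, and the megaprogram models only the interaction between them:

```python
# Each "megamodule" hides a subsystem behind a single narrow interface.
class Megamodule:
    def step(self, inputs):
        raise NotImplementedError

class CountryEconomy(Megamodule):
    """One country's economy, reduced to a toy rule for illustration."""
    def __init__(self, gdp, growth):
        self.gdp, self.growth = gdp, growth

    def step(self, inputs):
        # Grow, then absorb trade flows arriving from the other modules.
        self.gdp = self.gdp * (1 + self.growth) + sum(inputs)
        return self.gdp * 0.01  # exports offered to the others

def world_economy(modules, years):
    """The megaprogram: it knows nothing about GDP rules, only how
    the modules' outputs feed back into one another's inputs."""
    flows = [0.0] * len(modules)
    for _ in range(years):
        exports = [m.step([f]) for m, f in zip(modules, flows)]
        total = sum(exports)
        # Each country imports an equal share of everyone else's exports.
        flows = [(total - e) / (len(modules) - 1) for e in exports]
    return [m.gdp for m in modules]
```

The design choice to note is the asymmetry: the megamodules own all the domain knowledge, while the megaprogram is nothing but composition and interconnection.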
Megaprogramming, sometimes referred to as "programming in the large," involves managing programs of extreme size. As a consequence, development teams will also grow. Another factor that may be less apparent, however, is that the life of a system will necessarily be extended. Such extended-life systems must be easily extendable to accommodate change over longer periods of time, and issues such as data persistence must be considered. Therefore, a key concern of megaprogramming is managing the life cycle of megasystems.
So, what will megalanguages look like? In all likelihood, megamodules will support multiple paradigms enabling today's object-oriented (and procedural) languages to play key roles. In addition to addressing the issues of life cycle management, however, a megalanguage must support the interconnection of, and a common interface to, these large modules. Additionally, megalanguages will have to handle pragmatic problems such as those associated with concurrency, provide support for interrupts and exception handling, and deal with real-time systems.
If you're skeptical, note that according to the OOPS Messenger, the president's science adviser has proposed a $2 billion, five-year plan that includes megaprogramming as a primary goal.2
Of course, object-oriented programming presents its own challenges that must be resolved before we move to the next generation of programming. And, what if you're not sold on this object-oriented hype? The biggest stumbling block I see for objects is typified in the tired, but true, saying: "Garbage in, garbage out."
The problem is that objects place more weight on design than previous approaches did. Unfortunately, few have formal training in design, because our education stresses engineering. Hence, much of the design work occurs during the implementation phase. But, object-oriented software design goes beyond the process of organizing hierarchies, classes, and objects. Consequently, many programmers are finding that, although they have a working program, they must redesign to truly gain the benefits of extensibility and reusability.
In some sense, design is the simulation or modeling of a problem. And the success of a given design depends largely on how well the solution fits the problem, especially as the problem changes. To complicate matters, subtle aspects of the problem may not be apparent during the design phase, so the design must be as flexible and extensible as the system it models.
Hopefully, the coming years will teach us how and, perhaps more importantly, when to use our new-found wisdom. Certainly, object orientation is a missing piece, but it does not represent the entire puzzle, and you should keep in mind that we have yet to find the silver bullet.
Jonathan Erickson
Like it or not, the technologies that make up the fragile infrastructure of technological progress are barreling headlong into roadblocks that are legal, not technical, in nature. As a consequence of this rush, the spirit of innovation that's fueled software development since it began -- and at the breakneck speed we've come to expect -- may run out of gas, if it doesn't first come to a crashing halt. In any event, future programming efforts may be very different from today's, as programmers discover they need to be clever paralegals first, and competent coders second.
Software patents and copyrights are at the heart of this legal labyrinth. Putting aside ethical questions surrounding software patents, a number of day-to-day, legal-related programming issues remain. To my mind, the most confounding problem is simply knowing whether or not the algorithm you're using has been patented. Of course, you'd expect the U.S. Patent Office to be the place to go to find answers to questions like this; at least that's what I thought. The answer I received, however, was that there is no simple way to find out. You can't say "give me a list of all registered software patents so that I can avoid using them," because no such list or database currently exists.
The way the Patent Office works is that all patents are assigned to a primary category (software, by the way, is "broadly" assigned to category #364) which is made up of classes and subclasses. The patent is then cross-referenced to one or more subsidiary categories, again with individual classes and subclasses. Many software patents, it turns out, are buried as subclasses within a subsidiary category in a patent for some kind of hardware invention.
The Patent Office isn't trying to keep trade secrets, well, secret; it does publish a list of patents after they've been granted. This list provides you with the first step for challenging a patent -- if you know it exists. You simply request a reexamination and provide prior art or other relevant information that the patent examiner might have missed. This is what competitors of patentees often do. The Patent Office doesn't publish a list of "applied for" patents; you have to wait until the "granted" list is made public.
(In defense of the Patent Office, examiners are overworked and there is a shortage of them. It takes an average of 18 months for a patent to be approved; for new areas like biotechnology, it can take up to four years.)
This takes us back to my original question: If you're a programmer implementing a familiar algorithm to draw a circle, for example, how do you find out if that technique has been patented? The answer is straightforward: There is no way. Because of this informational maze, your most common recourse will be, in all likelihood, to forge ahead and wait (but not hope) for someone's attorney to call. Not the safest tack, but the most expedient. In fact, this may be what you're doing right now -- you just don't know it.
(Maybe what we need is a "patent checker," somewhat like a spell checker, that works like this: As you begin compiling your source code, the checker looks for algorithms that, according to its database, match patented algorithms. When it hits one, the system pops into the debugger with the cursor on the patented technique and a message flashes the assigned patent number. Naturally, adding hypertext lets you click on the patent number to find out who owns the patent and other details. You could tie the checker into your bank account and automatically cut a check to cover the license fee. Or you might want to add a "patent thesaurus" for a selection of safe workarounds, user definable, of course. But I'm getting carried away with entrepreneurial inclinations....)
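The "patent checker" fantasy above is easy to caricature in code. Here is a toy sketch (the patent numbers, patterns, and the notion of fingerprinting a technique with a regular expression are all invented for this illustration -- real algorithm detection would be vastly harder):

```python
import re

# A toy database mapping an invented patent number to a pattern that
# "fingerprints" the patented technique in source text.
PATENT_DB = {
    "1,234,567": re.compile(r"\blzw_compress\b"),          # hypothetical
    "7,654,321": re.compile(r"\bxor\b.*\bcursor\b", re.I), # hypothetical
}

def patent_check(source):
    """Scan source text and return the (invented) patent numbers
    whose fingerprint patterns it matches."""
    hits = [number for number, pattern in PATENT_DB.items()
            if pattern.search(source)]
    return sorted(hits)
```

Of course, the joke is in the database: textual pattern matching cannot recognize an algorithm expressed a different way, which is exactly why no such checker exists and why the informational maze persists.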
Copyrights raise equally confusing questions and I'm willing to bet that over the next decade, the big questions in this arena will involve the concept of public domain and whether or not it exists anymore.
Perhaps it doesn't. The way the copyright act is written is that any time you take pen to paper (or, in this day and age, fingertips to keyboards), you own the copyright to what you've created. To formally protect that material, you must register it with the copyright office, thereby enabling you to claim damages and recover legal fees if someone infringes on your copyright. But what if you've created something (like source code) and want to release it to the public domain for the betterment of your fellow citizens (programmers)? Sorry, there's no government form that lets you do this. The best you can do is choose not to enforce the copyright.
For the sake of argument, assume you've openly (that is, with the author's knowledge) used "public domain" code, but in software that hasn't been commercially successful; further assume that the original author didn't mind your using his code. The American dream being what it is, one of your programs -- one that incorporates this public domain code -- becomes wildly successful, making you both rich and famous. Wonderful, you say, until you get a letter from the copyright holder (or, more likely, his lawyer). Surprise, surprise -- he's decided to enforce his copyright after all, license fee attached. Is this the kind of public domain you want to trust?
Here's another copyright issue that's also up in the air. When you register for a software copyright, do you protect the object code or the source code (or both)? Most developers "publish" and distribute object or binary code versions of the source code; the source itself is kept secret. The question then is, does the copyright law in effect "decompile" the source from the object code? Maybe so, maybe no. Pick a card, take a chance.
I've only scratched the scruffy surface of the legal questions that are beginning to bedevil software developers -- and computer users, for that matter. Try this one on for size: Who has the right to read the electronic mail you send and receive over the company LAN or over an online service? Just you? Can your boss or the owner of the company sneak a peek at your e-mail? This question is being answered in a couple of courtrooms right now and, to my mind, should be relatively easy to settle, at least compared to the issue of software patents.
My one hope is that the legal quagmires we're starting to encounter are potholes in the road, not chasms, and that we'll pass over them carefully, if not quickly. Unfortunately, it will probably take the next decade to sort out the answers. In the meantime, I'll wager that either some large patent-holding corporation will take a lone programmer or small development house to court, or a lone patent-holding programmer will sue a large software company. (Well actually, both types of cases have occurred, but with out-of-court settlements, not clear-cut decisions and answers.) I hate to say it, but court challenges may be the only way we'll get an answer. In any event, we'll all pay a price that I hope isn't too great as we travel this road, which I pray isn't too perilous. And I further hope we'll all be aboard for the ride for as long as it lasts, no matter where it takes us.
Copyright © 1991, Dr. Dobb's Journal