Two years ago, in Seattle for a programming conference, I took a cab from the airport. The driver, a burly fellow with tattoos on both forearms, noticed my paraphernalia and said: "You have a computer? I'm running Windows 3.0 in Enhanced Mode and really love it." It was then I knew, a full year before the release of 3.1, that Windows would fully dominate the world of desktops.
I haven't talked to any cabbies who are using templates, exceptions, or runtime type identification, but I don't consider the question, "Will C++ take over the world?" an open issue any more.
Remember "The Year of the LAN"? For the last ten years, next year was predicted to be it. The Year of the LAN never actually arrived, but today you look around and see networks in place everywhere.
So it is with C++. Although no one has predicted "The Year of C++" (and such a year may never be a visible entity anyway), the time is not far off when using C for mainstream application programming will be as quaint as using assembler. Ten years ago everyone in the mainstream was coming to grips with C (which was by then old hat to researchers), but all successful programs were still written in assembler (Lotus 1-2-3, WordPerfect, WordStar, NetWare, DOS). Now mainstream programmers are absorbing C++ (which is now old hat to researchers and academics), even though successful programs are still mostly written in C (PageMaker, Excel, Word, Windows).
So the question is not "Will C++ take over the world?" but rather, "Why?" Given other alternatives, why choose a language that's been called "one of the most grotesque and cryptic programming languages man has ever created" (Ray Duncan, PC Magazine, August 1991)?
I must confess I'm with Ray on this one. I've been using C for 12 years, and object-oriented languages (Smalltalk, Objective-C) on occasion since 1984. I've no quarrel with the basic OOP goals of encapsulation, information hiding, reuse of code via inheritance, and polymorphism; that's what I try to achieve with my C coding style. And I realize that C can't tackle these problems unless the language is extended. It's the resulting melange that rubs me the wrong way--despite the great care and deliberation used by Bjarne and company in undertaking changes to the language.
Certainly there are some who feel as I do. Tom Cargill writes: "If you think C++ is not overly complicated, just what is a 'protected abstract virtual base pure virtual private destructor,' and when was the last time you needed one?" (C++ Journal, Fall 1990). David Smythe says, "C++ is incredibly complex... The number of caveats which productive C++ programmers must grasp is simply too large" (C++ Report, February 1989).
I recently met with Michael Tiemann, author of GNU C++, president of Cygnus (which sells support for GNU C++), and one of the handful of people who've actually implemented a C++ compiler. The meeting was not on the subject of language design, but he nevertheless volunteered the opinion that C++ was grotesque and baroque when compared to cleaner languages such as Smalltalk and Objective-C.
Yet despite these remarks, all the people I've quoted are actively using or working with C++. Tiemann's company is a leading player in C++ for embedded systems; Cargill just published a well-received book, C++ Programming Style; Duncan completed an illuminating three-part series on C++ for PC Magazine; and so on. To understand why this makes eminent sense, I reread a prescient paper by Dick Gabriel, long-time Lisp hacker extraordinaire and founder of Lucid, which sells (among other things) a C++ compiler.
This classic paper, entitled "Good News, Bad News, and How to Win Big," was circulated on the Internet last year. Gabriel begins by talking about the MIT/Stanford style of design, an approach which can be called "Do the right thing." The tenets of this philosophy are simplicity, correctness, consistency, and completeness. This design approach holds all these qualities to be much more important than easing the programmer's burden in implementing a given design. Results of this design approach include the Scheme language, Common Lisp (with CLOS extensions), and the ITS operating system used to run the PDP-10s at MIT's AI lab.
In contrast to this philosophy, there's what Gabriel calls the "worse-is-better" approach--also known as the "New Jersey" school of design (after AT&T's Bell Labs facilities in that state). In the New Jersey approach, simplicity, correctness, consistency, and completeness are all laudable goals; but all can, on occasion, be sacrificed. Gabriel continues:
[In the New Jersey approach] it is more important for the implementation to be simple than the interface... It is slightly better to be simple than correct.... Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency. Completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.... The programmer is conditioned to sacrifice some safety, convenience, and hassle to get good performance....
In other words, if doing the right thing becomes too complex, punt, and let the user of the API (or programming tool or language) bear the burden. Gabriel considers UNIX and C to be the exemplars of this design method, with some distaste.
But he then goes on to say:
However, I believe that worse is better, even in its strawman form, has better survival characteristics than the right thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.... UNIX and C are the ultimate computer viruses.... It is important to remember that the initial virus has to be basically good.... Once the virus has spread, there will be pressure to improve it, possibly by increasing its functionality closer to 90%, but users have already been conditioned to accept worse than the right thing. Therefore, the worse is better software first will gain acceptance, second will condition its users to expect less, and third will be improved to a point that is almost the right thing.
Gabriel's dispassionate analysis arrives at its logical conclusion: "The good news is that in 1995 we will have a good operating system and programming language; the bad news is that they will be UNIX and C++."
This explains why any new programs I write over the coming year will be in C++. I don't think I'll be alone in this regard.
Copyright © 1992, Dr. Dobb's Journal