In October, this magazine took what you might call a controversial position on one of the most popular languages in use today. It was the annual object-oriented issue, and the cover shouted: "Beyond C++: Considering the Alternatives." The clear implication was that C++ was not the final word on object-oriented programming languages.

Okay, I lied. You wouldn't call that a controversial position. Nobody who has a clue as to what "object oriented" means would argue that C++ is the final word on object-oriented languages. Still, to cast aspersions on a language that seems to be gathering momentum like a snowball on a Sierra slope is to fling down the defiant mitten of challenge. And we certainly flung it in that issue's nine articles on alternatives to C++, which ranged from the lofty and intricate structures of Eiffel, to no-class Drool. (Sorry, Dave.)

The case, or at least one case, against C++ is that it is not as "pure" an object-oriented language as, say, Eiffel or Smalltalk. That's a point the language's biggest boosters can grant. The crucial test, they would say, is: Is it what I need? If what I need is a better C, what does it matter that C++ isn't pure? And C++ does add certain desirable features to C.
You can't argue with that, although you can still argue that C++ is not a really fine object-oriented language. But what kind of argument is that? Those who criticize C++ as a less-than-pure object-oriented language are judging it against standards that its designer may not have had in mind when he designed it, and that most of its users may not have in mind when they use it. Is it fair to judge a language by standards that its designer and many of its users don't hold it to?
Well, is it fair to hold inner-city gang members to laws they may not buy into? It is if we believe in the enduring and universal value of those laws. Judging C++ against object-oriented standards begs the question of the value of those standards.
Right about here I need to confess that I really don't care whether C++ is a "pure" object-oriented language or not. In fact, I'll have nothing more to say about the question in the rest of this column.
I brought up the issue as an example of a kind of debate that comes up in discussions (and the design) of programming languages. It's similar to the historical arguments about the dangers of GO TO statements. Such arguments compare one programming paradigm with another, or one aspect of one paradigm with an aspect of another.
Such arguments usually can't be resolved by writing some code.
It's much the same in science: You can't directly compare one paradigm with another because they use terms differently, have different goals, and approach their subjects differently. You usually can't design a clean experiment that decides which paradigm is the right one, the way that you can decide between rival theories within a single paradigm on the basis of one crucial experiment.
It's similar with programming paradigms: You can't decide which one is "right" because they use terms differently, have different goals, and approach their subjects differently. In science and in programming, paradigms usually get replaced because long experience with alternative paradigms shows that one just seems to work better than another.
Nevertheless, some concrete programming techniques can be used to compare paradigms. I'll present one shortly.
First, though, a very specific criticism of C++: The fact that C++ is a better C gets in the way of its being seen and used as a truly object-oriented development environment.
It's possible to buy and use a C++ development environment without ever really dealing with the object-oriented features of the language. I've been working lately with the Symantec C++ development environment for the Mac. Some 93 percent of the documentation is generic to Symantec's line of C-based products. Reading the documentation, you could easily convince yourself that you had purchased Symantec's Think C compiler. Which, in fact, you have: It's part of the package. But virtually none of the documentation presents C++ as an object-oriented development environment. It tells what all the features are, but not why they're there.
I'm not faulting Symantec; that's how C++ is. Precisely because C++ is such a good enhancement of C, it's easy to use C++ without ever adopting the object-oriented paradigm; in fact, without ever learning object-oriented programming.
Merely having a C++ development environment does nothing to educate you about object-oriented concepts. Having a knowledge of C++ doesn't necessarily mean that you know object-oriented programming; and it's precisely because of this that good books on the subject are necessary.
Let me point you to Bertrand Meyer, creator of Eiffel and an author with a solid understanding of the theory of object-oriented programming.
In one of his books, Meyer uses an interesting technique for explaining the differences between two theoretical constructs in object-oriented programming--between aspects of two different programming paradigms. It's a technique that I think should be in any programmer's "intellectual" toolkit.
Here's what Meyer does:
In one chapter of his Object-Oriented Software Construction (Prentice Hall, 1988), Meyer compares the concepts of inheritance and genericity. Inheritance, specific to object-oriented languages, lets you construct modules through successive specialization and extension. Genericity, a feature of Ada that was originally introduced in Algol-68, is the ability to define parameterized modules, the parameters usually being types. Both inheritance and genericity are ways of making software components more extendible and reusable. Both make use of overloading (more than one meaning for one name) and polymorphism (more than one form for one program entity).
Meyer asks the obvious question: If inheritance and genericity are two attempts to do the same thing, that is, to make more-flexible modules, how do they compare? Are they redundant? Incompatible? Complementary? Should one choose between them, or does it make sense to combine them?
Having asked the question, or questions, Meyer could immediately go on to answer them. However, he doesn't choose to do that; instead, he works through what you need to think about in order to answer the questions for yourself.
First, Meyer presents examples of the uses of genericity and inheritance, carefully chosen to demonstrate the most salient features and consequences of the two techniques. These are just the kind of examples you'd find in books on Ada, Eiffel, or Smalltalk programming. His genericity examples include parameterized routines and packages, and they touch on constrained and unconstrained genericity. For inheritance, he works through the design of a general-purpose module library for files, with classes like FILE, TEXT_FILE, DIRECTORY, DEVICE, and TAPE. The point of the examples is not to evaluate the techniques, but to examine them in enough depth that you feel you have a grasp of their characteristics.
Next, he uses your knowledge of these characteristics to work through the process of simulating each technique in terms of the other: simulating inheritance using genericity, and simulating genericity using inheritance. (Having just worked through the examples makes it easier for you to see what would constitute an acceptable simulation.)
He approaches the simulation of inheritance by trying to construct inheritance in Ada, a language that doesn't have it. (Negatives in technology are always susceptible to time decay; let's say Ada doesn't traditionally have inheritance, and didn't in the version he used.) He asks whether Ada can be made, through its mechanisms of genericity, to simulate the characteristics of inheritance. Overloading, he says, is easy, but polymorphism is a different story. The closest he can come to simulating polymorphic entities is to use a record with variant fields, a feature that even Pascal has. This attempt, though, falls short in several ways. So he concludes that you can't, in fact, simulate inheritance using genericity.
Next, he shows how to simulate genericity with inheritance, using his own object-oriented language, Eiffel, as the vehicle. Perhaps not surprisingly, he demonstrates that genericity can be simulated by inheritance. Inheritance is the more general concept. The real point, though, is that you see the details of just how he simulates genericity using inheritance.
It isn't pretty. He needs to employ spurious duplications of code, and the conceptually simpler of the two cases turns out to be just as complex to implement as the conceptually more difficult one.
The moral of Meyer's lesson, or at least the moral that I draw from it, is not that Eiffel is better than Ada, or that inheritance is better than genericity, or even that genericity and inheritance are just different approaches to the same problem and have different strengths and weaknesses. The moral, I think, is that these techniques embody different ways of thinking about the problem at hand.
It seems entirely possible that an experienced Ada programmer just getting started with Eiffel might employ inheritance just as though it were a tool for simulating genericity. Such a programmer would end up writing unnecessarily complex, and probably inefficient, code.
And it wouldn't make much difference, I suspect, if that Ada programmer had been told that the effective use of inheritance requires a different way of thinking about problems than does genericity. Most of the time, we use the tools we know how to use in the ways we know how to use them, and we use unfamiliar tools in the same ways. If it can be used like a hammer, it will be.
But having worked through the exercise of implementing genericity and inheritance in terms of one another, that Ada programmer would have the conceptual background to be able to see why he probably shouldn't pound nails with the new tool.
Okay, schematically, what Meyer has done is this: To compare the concepts x and y, he implements x in terms of y and vice versa. This, I claim, is a special, computational, concrete case of a more general, noncomputational, abstract technique that you may be familiar with from other contexts: ensuring that you understand related concepts by defining each in terms of the other.
That technique is an old and a useful trick. The idea is, if you can figure out how to define x in terms of y, you can be assured that you understand x, at least in the context of y. But if you can also define y in terms of x, you have a context-free understanding of the relationship between x and y.
Meyer's technique is exactly the same thing, except that you're not just writing definitions, you're writing code; so you can be more sure that you've grasped the relationship: You can test programs more easily than you can test definitions.
Once you've implemented x in terms of y and vice versa, you are in a position to be able to see the implications of using one technique or the other.
I mentioned before that one paradigm usually replaces another only on the basis of people's experience with the two, and the perception that one seems to "work better." Meyer's method is a shortcut to the relevant experience.
I think that Meyer's trick is an important tool for examining theoretical issues concretely. The point is not to see how efficient each implementation is, but to see how it's done: to understand the architecture of each concept. The efficiency issue is a different thing altogether, part of implementation evaluation. This, of course, is very important. There's an example of it in that same October issue: Mike Floyd's comparative implementations of linked lists in a dozen languages.
Meyer's trick is something else. It's a tool for understanding programming concepts.
How else could Meyer's technique be used? How about in attacking the eternal debate in mental science and artificial intelligence between connectionist and modular paradigms? Is it possible to implement such models in terms of one another?
Not easily. Before starting, you would immediately run up against the complication that neither model solves the problem set: Neither is a model of the mind, neither tells how to build a Turing-test intelligent system. Still, it seems worth asking whether some sense can be made of the real practical differences between these approaches using Meyer's technique. I'm not equipped to answer the question, but maybe a sketch of the debate will inspire someone who is.
Actually, the two paradigms have many goals in common, and maybe they aren't such distinct paradigms at that. But each is a kind of metatheory, making no predictions, but simply characterizing what acceptable theories can look like. In that sense, they are not directly comparable and can legitimately be thought of as distinct paradigms. At least that's how I understand it.
The modular paradigm assumes that intelligence is made up of parts and that the parts can be understood or implemented separately. Virtually all computer models of mental phenomena and virtually all work in artificial intelligence before the advent of neural-network models was modular. Programming languages are probably inherently biased toward decomposing problems and implementing solutions through distinct modules.
The connectionist paradigm assumes that intelligence is a matter of which inputs get hooked up with which outputs, and how. Neural nets are the programming realization of connectionist thinking.
Both modularity and connectionism apply equally well in principle to artificial intelligence and to the natural type, but the AI side of the story goes back only a short distance. Most of the history lies in theories of natural intelligence.
Plato and Aristotle were modularists: They both described the soul as tripartite. John Locke was a more recent philosopher who tried to define the faculties, or functional modules, of the mind. Phrenology took the idea to a ridiculous extreme, trying to read personality from bumps on the head, which presumably were associated with oversized brain modules. Intelligence testing in this century was an attempt to discover, through the statistical method of factor analysis, the factors that made up intelligence. Noam Chomsky's claims for the special nature of speech are consistent with the modular paradigm.
Connectionism is most clearly seen in the psychological school of strict Skinnerian behaviorism, which takes as its purpose the elucidation of the connections between inputs (stimuli) and outputs (responses). Neural-network models aren't as strict (one might say blind) as Skinnerian behaviorism, but do share some of its biases. Learning is a matter of increasing the strength of some connections with respect to others. The raw material of mind is homogeneous. There are no hardwired subsystems of thought.
So: Is it possible to implement a connectionist model using modules, and vice versa? Obviously, nobody's going to implement a full connectionist model of the mind in terms of modules or vice versa; but implementing aspects of the paradigms would be interesting enough. Is that possible?
Yes, apparently. In principle, any modular theory can be modeled by a connectionist system. In fact, researchers in these fields have done one or the other, although it's not clear that both sides of the trick have been done, which is the point.
What I'd like to see is something like a neural-net model of Chomsky's generative grammar along with a modular model of the connectionist account of language acquisition.
That would be cool.
Copyright © 1994, Dr. Dobb's Journal