Amoebas at the start
Were not complex;
They tore themselves apart
And started sex.
    --Arthur Guiterman
In my last column in February, I reported on my interviews with Chuck Duff of The Whitewater Group and Jim Anderson of Digitalk, two proponents of a more-or-less-pure object-oriented programming paradigm. Their loyalty to this more or less pure paradigm is consistent with the fact that it's the paradigm to which each of them has hitched the chariot of his reputation and personal fortune. Both Actor (Duff's baby) and Smalltalk/V (Anderson's) are more or less pure object-oriented languages. It is plausible that others, whose chariots are otherwise harnessed, might disagree with some of Duff's and Anderson's views, and such is the case. That's what makes chariot races.
Both Anderson and Duff express strong reservations about multiple inheritance, the capability for an object to inherit from more than one ancestor. Actually, Smalltalk/V has the facility hidden within it, lacking only a user interface to make it available to the programmer; and there's a multiple-inheriting Actor in the wings. But Anderson and Duff seem less than enthusiastic about biting into the apple of multiple inheritance. There's the matter of name clashes, for instance. As Duff explains it, "there are cases in which two multiple-inherited classes with instance variables--with the same name each--may want to preserve their own copy of that instance variable, and there are equally viable cases in which they want to share it." There is no algorithm in existence for resolving such conflicts.
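The clash Duff describes is easy to reproduce. Here is a minimal Python sketch (class names and the `count` variable are invented for illustration): two parent classes each keep an instance variable named `count` for their own purpose, and the common heir silently ends up with a single merged slot -- neither sharing nor separation was chosen deliberately.

```python
# Two independently written classes, each with its own "count".

class Buffer:
    def __init__(self):
        self.count = 0            # number of buffered items

    def add(self, item):
        self.count += 1

class RetryPolicy:
    def __init__(self):
        self.count = 0            # number of retry attempts

    def retry(self):
        self.count += 1

class NetworkBuffer(Buffer, RetryPolicy):
    """The heir inherits from both; the two counts collide."""
    def __init__(self):
        Buffer.__init__(self)
        RetryPolicy.__init__(self)

nb = NetworkBuffer()
nb.add("packet")
nb.retry()
# Both parents wrote to the same attribute slot:
print(nb.count)   # 2 -- neither "items buffered" nor "retries" alone
```

Whether the two variables should have been shared or kept separate depends on what the classes mean, which is exactly why no algorithm can decide it.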
When an object begets new objects all by itself, things are simpler. Multiple inheritance is mysterious and messy. If its lure is strong, there may be wisdom in resisting temptation. "Dragons lie there," Duff says, expressing a diffidence perhaps appropriate for a paradigm still in its adolescence.
I don't mean to put down adolescence, or its preoccupations. After all, I just wrote a book about HyperTalk, the semi-object-oriented language of a product whose creator has called it a "software erector set."
But there comes an end to adolescence.
Unlike Duff and Anderson, Bertrand Meyer has gone all the way. Eiffel is the name of Meyer's object-oriented programming language, which incorporates multiple inheritance. In fact, Eiffel depends intimately on multiple inheritance in its own structure. While Duff and Anderson wonder if the customers who ask them about multiple inheritance are just indulging in ivory-tower fantasy, for Meyer it is a fact of life.
In the November/December 1988 issue of The Journal of Object-Oriented Programming, Meyer presents the view from the Eiffel Tower:
"Whenever you talk about multiple inheritance, someone is bound to ask sooner or later (usually sooner) what happens in the case of name clashes -- identically named features in two or more parent classes. No doubt the question is legitimate, but the gravity with which it is asked -- as if it were a deep conceptual issue -- has been an unending source of bewilderment to me. I believe it is one of these cases in which, if you only take a minute or two to pose the problem cleanly, the solution follows immediately.
"First, it is purely a syntactical problem, due to conflicting name choices. It has nothing to do with the fundamental properties of the classes involved. Second, nothing is wrong with the parents; each is perfectly consistent as it stands. The 'culprit' is the common heir, that tries to combine two classes that are incompatible as they stand. So the heir should also be responsible for the solution."
Eiffel's solution is to reject such ambiguities with a compiler error message, and to require the programmer to resolve them, possibly by renaming one or both features (my_father's_temper, my_mother's_eyes). Duff has asked, "Do you ask the programmer on a case-by-case basis to make the resolution?" Meyer obviously thinks that's a reasonable solution.
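Python has no equivalent of Eiffel's rename clause, but a rough analogue of Meyer's principle -- the heir, not the parents, resolves the clash -- can be sketched by having the heir define unambiguous names that forward to the parent feature it means (class and method names here are invented):

```python
# The heir resolves a feature-name clash by renaming, Eiffel-style.

class Father:
    def temper(self):
        return "short"

class Mother:
    def temper(self):          # same feature name: a clash
        return "even"

class Child(Father, Mother):
    # Each inherited feature gets its own unambiguous name;
    # the parents themselves are left untouched.
    def my_fathers_temper(self):
        return Father.temper(self)

    def my_mothers_temper(self):
        return Mother.temper(self)

c = Child()
print(c.my_fathers_temper())   # short
print(c.my_mothers_temper())   # even
```

The parents remain consistent as they stand; only the "culprit" heir carries the resolution, which is Meyer's point.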
Meyer says that users of a class should not have to know its ancestry; the interface to the class should be complete and consistent on its own terms. The simple expedient of appropriate renaming is enough, he says, to ensure that name clashes don't get in the way of this goal.
Renaming is a purely syntactic business, and Meyer admits that "the improvement it brings...may be labeled a cosmetic one." But this is not, in his view, to dismiss it as trivial. Meyer does not disdain cosmetics.
In the same column, Meyer ridicules nonsensical examples of multiple inheritance, such as the class apple_pie inheriting from apple and pie, or class airplane inheriting from fuselage and engine, pointing out that an apple pie is not an apple and an airplane is neither a fuselage nor an engine. And he presents some real examples in which multiple inheritance seems called for. One of these is the class window, which in the implementation he describes, inherits from the classes rect_shape and tree. In this implementation, a window is a rectangle, but it is also a tree, with properties such as superwindows and subwindows and facilities for adding and deleting subwindows. Meyer's description of a lazy programmer putting together a windowing system in a day by drawing on existing rect_shape and tree classes gives him the opportunity to show the need for renaming, even when there are no name clashes.
Without renaming, the window class inherits tree features with all their arboreal nomenclature clinging to them. A superwindow is called a parent_node, and the method for adding a subwindow is called insert_node. Anyone using this class would likely find this confusing. The user of a class has a right to expect it to be complete and consistent on its own terms. Leaving the tree terminology in the window class is a pointless flaunting of ancestry, Meyer believes.
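The lazy programmer's day of work might look something like this Python sketch. The class and feature names (rect_shape, tree, parent_node, insert_node) follow Meyer's description; the bodies are invented placeholders. Note that the renaming here hides the arboreal vocabulary even though there are no clashes:

```python
# Meyer's window example: a window is a rectangle AND a tree.

class RectShape:
    def __init__(self, width, height):
        self.width, self.height = width, height

class Tree:
    def __init__(self):
        self.parent_node = None
        self.children = []

    def insert_node(self, node):
        node.parent_node = self
        self.children.append(node)

class Window(RectShape, Tree):
    def __init__(self, width, height):
        RectShape.__init__(self, width, height)
        Tree.__init__(self)

    # "Rename" the tree vocabulary into window terms, so users of
    # Window never see its ancestry.
    @property
    def superwindow(self):
        return self.parent_node

    def add_subwindow(self, w):
        self.insert_node(w)

    @property
    def subwindows(self):
        return list(self.children)

desktop = Window(1024, 768)
dialog = Window(300, 200)
desktop.add_subwindow(dialog)
print(dialog.superwindow is desktop)   # True
```

A user of Window speaks only of superwindows and subwindows; the tree underneath is an implementation ancestor, not part of the interface.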
Meyer is not an unbiased reporter, but clearly one who has tasted the fruits of multiple inheritance and cannot go back to the simpler, more innocent world of single inheritance. As he puts it, "life without multiple inheritance would be...boring."
The centerspread of that issue of The Journal of Object-Oriented Programming is a picture of the Eiffel Tower. It looks like something that an adult erector set would produce. I wonder why Meyer chose that image to symbolize his product.
With the end of adolescence comes a desire for privacy. The matter is one Chuck Duff has been giving a lot of thought to recently.
"There is no privacy provision in Smalltalk," Duff says. There's no privacy provision in Actor, either, but Duff and the programmers at The Whitewater Group are working on that. Duff thinks that privacy is important to the future of object-oriented programming, because without it "once you write a method, it is visible to all of your descendant classes. The same is true of instance variables. Without more control over privacy it becomes very difficult to do things like multiple inheritance well."
I suppose it would. Duff gives details:
"There are really three categories of things in an object-oriented system: There's an object that is of the class for which a method was originally written. That's the most local. Then there are descendants of that class; they're not as local; they're almost like outsiders, but they're privileged outsiders in Smalltalk. And then there are objects of completely different classes.
"Any good object-oriented language will make things opaque to outsiders; that's the whole point, that's the abstract data type layer. So you can't look into the representation of something that you're not related to. The problem is that in Smalltalk, there's no distinction between an object of the class and an object of a descendant class, so you have full visibility to all those inherited methods and instance variables, and that really isn't appropriate."
The first level is drawing the curtains against the neighbors, the second is closing the bedroom door against the kids. In the interest of making the code more maintainable, programmers at The Whitewater Group are currently considering how to implement that extra layer of privacy, the bedroom door.
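Duff's three layers can be sketched with Python's conventions, with the caveat that Python enforces none of this the way Duff would like -- name mangling is a curtain, not a lock. The Account and SavingsAccount classes here are invented:

```python
# Three visibility layers: public interface, subclass-visible,
# and class-private.

class Account:
    def __init__(self):
        self.__ledger = []     # name-mangled: the bedroom door,
                               # hidden even from descendants
        self._balance = 0      # convention: for the class and its
                               # "privileged outsider" descendants

    def deposit(self, amount): # public: the curtains stay open here
        self._balance += amount
        self.__ledger.append(amount)

class SavingsAccount(Account):
    def add_interest(self, rate):
        # A descendant may use the protected instance variable...
        self.deposit(int(self._balance * rate))

s = SavingsAccount()
s.deposit(100)
s.add_interest(0.05)
print(s._balance)              # 105
# ...but s.__ledger raises AttributeError: the mangled name is
# walled off, roughly the extra layer Whitewater is considering.
```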
And where are such changes as multiple inheritance and privacy taking object-oriented programming? Closer to the ideal of a system of software components that can be reused to solve problems similar to the one for which they were first developed, and by programmers other than the developer, and that can be adapted to new uses without actually being modified? Toward a system that minimizes the impact of change in software development? That would be nice.
One person who has thought hard about the idea of reusable software components is Brad Cox, whose book Object-Oriented Programming: An Evolutionary Approach gets at it via the concept of the software IC. Although his book claims to be about object-oriented programming, Cox takes a less pure approach than Meyer, Anderson, and Duff, presenting all his examples in the hybrid language Objective-C and advocating what he calls a "hybrid defense" against change. He uses the word "defense" frequently in discussing software development; for Cox, some protection is required if we are to do it responsibly.
Cox's idea of software ICs may be a bigger idea than OOP. In spelling out some of the desiderata of software building blocks that can serve as the base for a pyramid of software development, he characterizes the pure object-oriented approach as building "armor-plated objects that communicate by sending messages." He describes conventional programming as building "efficient but brittle software systems, surrounded by static defensive structures that protect them from change." Encapsulation, inheritance, and dynamic binding are techniques that can overcome the deficiencies of conventional programming when change is necessary. But encapsulation is the base on which a software IC approach must be built:
"Encapsulation is the foundation of the whole approach. Its contribution is restricting the effect of change by placing a wall of code around each piece of data. All access to the data is handled by the procedures that were put there to mediate access to the data." Just like IC design, "object-oriented programming...is a way for suppliers to encapsulate functionality for delivery to consumers."
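Cox's "wall of code around each piece of data" can be shown in miniature. In this hedged Python sketch (the Stack class is invented), clients touch the data only through the mediating procedures, so the supplier can later swap the representation without the consumers noticing:

```python
# A tiny software IC: the representation is private, the
# interface is the whole contract.

class Stack:
    """Clients see only push/pop/depth; the list inside is private."""
    def __init__(self):
        self.__items = []      # could become a linked list tomorrow;
                               # no client code would change

    def push(self, x):
        self.__items.append(x)

    def pop(self):
        return self.__items.pop()

    def depth(self):
        return len(self.__items)

st = Stack()
st.push(1)
st.push(2)
print(st.pop())    # 2
print(st.depth())  # 1
```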
But just like ICs, such components have to be bug free. We've all encountered the programmer folk wisdom that "a fully debugged program is one that hasn't failed recently." But we all want to believe that this is just cynicism, that it is possible to build bug-free software components and seal them up in black boxes. It's pleasant to imagine that a programming system could be built that meets these desiderata, that by some reshuffling of the bits we could remold programming nearer to our hearts' desire. That we could build to last with solid blocks. At the moment it seems a poetic fancy.
Ah love! could you and I with Him conspire
To grasp this sorry scheme of things entire,
Would not we shatter it to bits -- and then
Re-mould it nearer to the Heart's Desire!
I suppose we would. But this was FitzGerald's third version of the stanza; he continued to issue updates and bug fixes over a period of 20 years. Cox points further to the need for assigning responsibility in human-computer systems, something that makes no sense without bug-free, absolutely reliable software components. Is object-oriented programming the solution? "Absolutely not," Cox says. Is there a solution?
"Neuralnetworks."
It was almost the first word I heard when I picked up the phone that morning at I know not what hour. Pedants will say that "neural networks" is two words, but it sounded like one word to me, which it may be in German, although, as the fog cleared, I realized that the speaker, while indeed German, was speaking English.
It was my friend Jurgen Fey, an editor for PC Magazin, a German affiliate of DDJ. Jurgen's English is excellent, but some of what he was saying skimmed over my befogged head that morning.
I did get it that Jurgen was putting together a special Neural Networks issue of PC Magazin. None better for the job, I thought, since Jurgen has been deeply immersed in neural network research for over a year now. He's done hardware and software development in support of neural net systems, and has read extensively in the theory of neural nets. Jurgen thinks that neural nets have a lot of potential, but no, he doesn't see neural nets as the ultimate answer.
As part of his work in putting together the special issue, he told me, he had ten calls to make that day, all to the United States. For some reason, he had started with me. I'm sure the other nine people had to offer him more, and I have no hesitation in recommending the issue to anyone interested in neural networks. It should be on the stands in Germany in mid-March, and it can be ordered through M&T Publishing. "Of course, it's all in German," Jurgen apologized, with the usual pause in which he allows me to reflect upon the linguistic shortcomings of Americans in general and me in particular.
Jurgen had no hesitation in recommending to me an excellent article on new neuron models, though, of course, it was in German. I think he said that he'd summarize it for me when he comes over in the spring. I know he said that a remarkable amount of neural network work (he must have said it better than that) takes the McCulloch-Pitts neuron model as gospel. In fact, McCulloch and Pitts didn't present the model very seriously back in 1943, and anyway, "something must have happened in the last 40 years."
He's right about McCulloch and Pitts.
Warren McCulloch and Walter Pitts published the seminal paper on neural nets in "The Bulletin of Mathematical Biophysics" in 1943. The paper was titled "A logical calculus of the ideas immanent in nervous activity," and it really was about logic, not biophysics. McCulloch and Pitts departed from physiological concerns to examine what the physiology might be doing -- what the hardware might be computing. To do so, they presented a mathematical model of the neuron.
The model they presented described the neuron as a fixed-threshold binary device. When its inputs exceeded some fixed threshold of activation, the neuron fired, and that was it. The inputs could be excitatory or inhibitory, but all excitatory inputs had the same weight and any inhibitory input had effectively infinite weight. If an inhibitory input was active, the neuron did not fire. Finally, time was quantized, so that a neuron summed its inputs and responded during a phase of fixed length; neural activity was not, in the model, continuous.
This model was sufficient to implement the propositional calculus, which meant that combinations of neurons could model any finite (propositional) logical expression. Since aspects of the model were based on actual neural research, there were the implications that the brain could be understood as we understand computers, and that computers could be built along the lines of organization of the brain. A lot of work was predicated on the assumption that the McCulloch-Pitts model of simple neurons connected in a complex net was a useful computational model. A lot of work was also predicated on the assumption that their simple neuronal model was correct.
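The model described above is simple enough to state in a few lines of Python (a sketch; the function names are mine). Binary inputs, unit excitatory weights, absolute inhibition, a fixed threshold -- and the propositional connectives fall out of the choice of threshold:

```python
# A McCulloch-Pitts neuron: fixed-threshold, binary, with
# any active inhibitory input vetoing the firing outright.

def mp_neuron(excitatory, inhibitory, threshold):
    if any(inhibitory):          # absolute inhibition
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Logic gates as single neurons, per the 1943 construction:
AND = lambda a, b: mp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [], threshold=1)
NOT = lambda a:    mp_neuron([1], [a], threshold=1)

print(AND(1, 1), OR(0, 1), NOT(1))   # 1 1 0
```

With AND, OR, and NOT in hand, any finite propositional expression can be wired up as a net of such neurons, which is the sense in which the model "implements the propositional calculus."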
The McCulloch-Pitts neuron model is now known to be quite wrong. Neurons are not simple logic elements. Their action is not all-or-none, and they are closer in function to voltage-to-frequency translators than to logic devices. The simple model has proved pregnant for computer science, spawning a great deal of neural nets research and some early application work that seems promising. But for modeling the brain, and possibly for some purely computational purposes as well, these new, truer models of neurons need to be examined.
The new models are not simple binary threshold models. They add parameters to the old McCulloch-Pitts neuron and more closely model real neural response. There are even analog models.
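The flavor of the newer, graded models can be suggested in a few lines -- a hedged sketch, not any particular published model, with invented parameters. Instead of an all-or-none spike, the neuron behaves like the voltage-to-frequency translator described above, mapping its summed input onto a continuous firing rate:

```python
import math

# A graded neuron: weighted inputs map to a continuous firing
# rate in (0, 1) via a smooth response curve, rather than to a
# 0/1 spike at a fixed threshold.

def graded_neuron(inputs, weights, bias=0.0):
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # logistic curve

rate = graded_neuron([0.5, 1.0], [2.0, -1.0], bias=0.1)
print(round(rate, 3))   # a graded value, not a binary spike
```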
The thought of building a complex system out of analog neurons made me nervous. Analog technology always makes me uncomfortable; if it's ones and zeros I have a hope of understanding, but analog is inherently inscrutable.
But the idea of analog artificial neurons was making me more uncomfortable, and in explanation I can only offer Lee Felsenstein's theory. Felsenstein, who created the Sol and Osborne 1 computers and the Pennywhistle modem, and many other things, has a theory. The gist of it, as I brought it up through that morning's fog, is that men build things because they can't have babies.
It's pointless to speculate about whether or not computers will ever be intelligent. No one knows what "intelligent" means. But it seems to me likely that computers will one day grow beyond their current tool status to become entities to be dealt with at a level of interaction now reserved for other people. I don't suppose we'll see it. If such artificial entities will one day share the earth with Man, their birth is still a long way in the future.
But I couldn't help wondering that morning if the gestation period had begun.
Copyright © 1989, Dr. Dobb's Journal