On the Paradigms Beat at SD '88

There was an edge to some of the audience's questions. It didn't faze the panel of experts.
More than one member of the audience for the panel discussion on software development in the 1990s at Miller Freeman's Software Development '88 conference (SD '88) asked essentially the same question: "Today, the emphasis in object-oriented programming seems to be on software development as a creative activity. Can't we get a little more scientific, can't we move toward an engineering approach to object-oriented programming? More specifically, are there any rules or strategies you can point to that will help us decide what objects to define?"
The questioners got little satisfaction. The panelists' answers ran thus: Chuck Duff, author of the object-oriented programming language Actor: "A good plan is to study the physical system you are trying to model and create the classes of objects it has."
Dick Gabriel of Lucid, deeply involved in object-oriented Common Lisp: "That's a fundamental question for which there is no easy answer. I try things."
Bjarne Stroustrup, author of object-oriented C++: "It's a holy grail. There is no panacea."
Chuck Moore, author of the Forth programming language, which doesn't have to be object-oriented: "Programming is an art; we might hope it becomes a craft; it will never be a science."
The panel members represented several programming paradigms besides the object-oriented one. Although there are object-oriented Forths and Lisps, those languages are surely paradigms unto themselves; and moderator Stan Kelly-Bootle was there to represent the procedural paradigm single-handedly if he had to. (He's written extensively on Modula-2, has just finished writing a book on C, and was wearing an Ada T-shirt.) But with their combined knowledge of the object-oriented paradigm, one of them should have been able to field the insistent question if anyone could.
Maybe no one can. Maybe there is an invariant principle stating that, for any given paradigm and for any given state of the programming art (excuse me, discipline), there are aspects of the programming process that can be formalized and other aspects that can't. If so, surely a great deal of programmer effort has to be directed toward the latter. That's where only creativity will suffice. Maybe, as the panelists seemed to be saying, the decision regarding what objects to create when developing an object-oriented system is such an aspect.
Maybe. And maybe when you cross paradigm boundaries, the aspects shift. Then the first disorienting puzzle presented to you by a new paradigm would be to identify the problems for which only creativity will suffice.
Some of you may have attended SD '88. Ron and Jon and Allen and Tyler and I were there. SD '88 has grown in three years to become a truly important and informative conference for software developers. Of course, some of the sessions were less important and informative than others, and unless you knew the speaker, it was a turkey shoot. The networking opportunities were good, though, as was the chance to see people whose work you've read. I had never seen Bjarne Stroustrup before.
I hope the conference sponsors can find ways to increase the conference's networking value next year. In addition to the sessions, the conference had exhibit space for companies, the main value of which was probably informal recruiting. Plans are for a greatly expanded exhibit program next year, and if that takes the direction of a sort of job fair it could be interesting.
This year there were tracks of lectures and workshops on artificial intelligence, database design, the C language, design methodologies, languages, and graphics. For the paradigmologist there was much to record. I attended many of the sessions I mention here, but since sessions ran concurrently I couldn't attend everything I was interested in. I'm summarizing some of the sessions from the proceedings.
The Oh-Oh Factor
While the speakers in the panel discussion I mentioned earlier weren't able to provide an engineering approach to deciding what object to develop in an object-oriented design, several sessions did deal with practical object-oriented programming issues.
Satish Thatte of TI talked about object-oriented database systems. OODB, Thatte argued, is a necessary step toward making smart front ends truly viable; conventional database architectures with AI front ends grafted on are handicapped by inflexibility. Citing the ten years it took relational database technology to be accepted commercially, he predicted that it will take OODB five to ten years to reach the market.
OODB represents a significant paradigm shift for database developers. Object-oriented programming may be the paradigm shift challenging the largest number of programmers today.
Chatting with Chuck
Chuck Duff and Mark Solinski led a workshop on Actor development. Along with the developers of Smalltalk, Chuck has the distinction of having developed a commercially successful language strictly for object-oriented programming. Actor is a pure object-oriented language, down to the activation records on the stack (they're objects, too). I missed his talk, so I called him after the show and he gave me a little more insight into object-oriented programming in general and Actor specifically.
Chuck talked about multiple inheritance, the ability of an object to inherit from more than one parent, and about why he left it out of Actor and has no plans to add it. "At the implementation level," he said, "it turns simple tree traversal into arbitrary graph traversal. Somehow you have to linearize the graph. Smalltalk-80 did it by copying code, physically copying the methods." Duff called this a cop-out. Linearizing the graph is not impossible. "You can unfold the graph. But it adds code bulk," he said.
At the user level, Duff sees another kind of problem. He fears that users will view multiple inheritance as a panacea and misuse it. It is difficult to avoid conflicts, and the efforts necessary to do so may "make you wonder if you are really simplifying anything." He acknowledged that Lisp systems such as Flavors have implemented multiple inheritance, but he says he's seen some unreadable Flavors code result from that decision.
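Duff's linearization problem is easy to see in miniature. Here is a sketch in Python (purely my illustration; Actor itself has no multiple inheritance): a class with two parents forces the language to flatten the inheritance graph into a single method-lookup order.

```python
# A diamond inheritance graph: D inherits from both B and C,
# which both inherit from A.  Method lookup must linearize this
# graph into one search order.

class A:
    def greet(self):
        return "A"

class B(A):
    def greet(self):
        return "B"

class C(A):
    def greet(self):
        return "C"

class D(B, C):
    pass  # inherits greet from two parents -- which one wins?

# Python resolves the conflict by computing a linearization of the
# graph (the "method resolution order"):
print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
print(D().greet())                          # 'B' -- B precedes C in the order
```

The conflict Duff worries about is right there in class D: the program's meaning hinges on a linearization rule the programmer must keep in mind.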
Having published last month some of Bjarne Stroustrup's views on what makes a language object-oriented, I asked Duff to talk about the defining elements of the object-oriented paradigm. "I think dynamic binding is fairly essential," he said. The Actor compiler works hard to convert dynamic bindings to static for efficiency, but the ability to use dynamic bindings supports what Duff calls "experimental programming." "Inheritance is necessary for reusing code. Ada packages are flat [do not support inheritance]," he added. Of course, the Ada people might disagree about the importance of inheritance, but he thought it fairly essential. "Encapsulation is widely accepted." Encapsulation, in conjunction with inheritance and dynamic binding, he said, is very powerful.
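Duff's three ingredients can be sketched in a few lines. In this illustrative Python fragment (my example, not Duff's), state is encapsulated behind methods, subclasses inherit and override, and the same message dispatches to different code depending on the receiver's class at run time:

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        self._r = r              # encapsulated state
    def area(self):
        return 3.14159 * self._r ** 2

class Square(Shape):
    def __init__(self, s):
        self._s = s
    def area(self):
        return self._s ** 2

# Dynamic binding: the message "area" is bound to a method only when
# it is sent, according to the receiver's class.
shapes = [Circle(1.0), Square(2.0)]
print([round(s.area(), 2) for s in shapes])  # [3.14, 4.0]
```

The loop at the bottom is Duff's "experimental programming" in miniature: new Shape subclasses can be dropped in without touching the code that sends the message.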
Returning to the question that had nagged the panel, he said, "We can be more scientific about it. We teach a course, and teach people to start with the physical system. The way its objects have evolved is probably a good way [to begin]." When the physical model is in need of redesign, he recommends doing systems analysis to find a better segmentation of the problem.
There will be more help for the object-oriented software engineer at this year's OOPSLA conference in San Diego in September, he said.
Back to SD '88: Another speaker, Rick Potter, discussed structured design for object-oriented programmers, pointing to a partial answer to the question that led off this column, but only a partial one. Yes, we can develop measures of goodness for objects and their interaction. No, that won't tell you what objects and classes to create.
What the Gods Would Destroy They First Submit to an IEEE Standards Committee
Object-oriented C was covered at SD '88 from several directions.
Lawrence Rosler predicted that the C programming language of the next generation will abandon at least one of the features that made the language popular: its compactness. C will get big, and will encompass alternative programming paradigms, certainly including object-oriented programming.
Bjarne Stroustrup gave an overview of C++, the proposed successor to C. He listed some of the features he left out of C++, including garbage collection, multiple inheritance (but AT&T has plans to add this), support for concurrency, exceptions, parameterized classes, and integration of the language with a programming environment. The benefits of these constraints, he explained, were compatibility, internal consistency, and efficiency. There were other workshops and lectures on C++ and Objective-C.
Parallel Tracks
I found three talks that dealt with issues of parallelism.
Robert Ward had played around on the parallel machines at the Advanced Computing Research Facility at Argonne Labs and talked about programming large-scale parallel architectures such as Encore, Cray, and Hypercube. The focus of his talk was on shared-memory implementations, not communicating processes (although he claimed that the approaches are in some sense duals of one another, and that you can simulate one approach with the other). He argued the case for extending C to handle this sort of parallelism: the machines he was discussing all supported some form of Unix, which favors C, and C would make the programs more portable.
Since most parallel-processing work is in the experimental stage or is done for research projects where the developers don't see much need for portability, he had to justify this approach. Portability and maintainability are linked, he pointed out; also, the architectures are not stable. Moreover, developing a portable approach to parallel processing will facilitate the development of benchmarks for evaluating parallel architectures.
Finally, he answered the objection that machine specificity is necessary to get the performance benefits of parallelism. There are algorithmic benefits accruing from the use of a parallel approach, he said, but the benefits will be masked by fiddling with machine-dependent optimizations.
Mark Gluck and David Parker spoke cogently on neural networks, Gluck explaining why cognitive psychologists and neurophysiologists care about the stuff, and Parker sketching an algorithm.
Gluck argued that these models are more appropriately called parallel-associative networks since they are in some ways not very neural-like at all. He sketched a brief history of associative net models, starting with the perceptron model of the early 1960s, which was unable to handle exclusive-OR; he told how Minsky and Papert shot down this model and everyone more or less abandoned the approach for a decade, and how it recently resurfaced when new algorithms were developed that implemented multilayer nets that do handle exclusive-OR.
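The exclusive-OR limitation Gluck described is easy to verify by brute force. This Python sketch (mine, not Gluck's) searches a grid of weights for a single threshold unit computing XOR, finds none, and then builds XOR from a two-layer net, just as the newer multilayer algorithms do:

```python
import itertools

def unit(w1, w2, b):
    # A single threshold unit: fires if w1*x1 + w2*x2 + b > 0.
    return lambda x1, x2: int(w1 * x1 + w2 * x2 + b > 0)

def realizes(f, target):
    return all(f(x1, x2) == target[(x1, x2)]
               for x1 in (0, 1) for x2 in (0, 1))

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

grid = [x / 2 for x in range(-8, 9)]   # weights from -4 to 4, step 0.5
found_or  = any(realizes(unit(w1, w2, b), OR)
                for w1, w2, b in itertools.product(grid, repeat=3))
found_xor = any(realizes(unit(w1, w2, b), XOR)
                for w1, w2, b in itertools.product(grid, repeat=3))
print(found_or, found_xor)   # True False: OR is linearly separable, XOR is not

# A second layer fixes it: XOR(x1,x2) = AND(OR(x1,x2), NAND(x1,x2)).
h1 = unit(1, 1, -0.5)        # OR
h2 = unit(-1, -1, 1.5)       # NAND
out = lambda x1, x2: unit(1, 1, -1.5)(h1(x1, x2), h2(x1, x2))  # AND of the two
print([out(x1, x2) for x1 in (0, 1) for x2 in (0, 1)])  # [0, 1, 1, 0]
```

No grid search can prove impossibility, of course; the real argument is Minsky and Papert's geometric one. But the hidden layer at the bottom is the whole story of the field's revival.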
Gluck has developed, with psychologist Gordon Bower, a model that accurately predicts human decision making in a medical prediction setting where the disease to be predicted is rare. Working with neurophysiologist Richard Thompson, he has applied a neural net model to sea slug neuron firing, with enlightening results.
Parker discussed an algorithm he developed for neural nets. He summarized the neural net approach succinctly. All learning is minimization, he said, generally minimization of error, and we have many good algorithms for minimization. The neural net approach is nothing but the parallelization of a minimization algorithm. His own algorithm is a parallel version of the steepest descent algorithm.
He pointed out that there was absolutely no performance advantage to the neural net approach over sequential minimization without parallel hardware. There may be design advantages, though.
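Parker's slogan that learning is just error minimization can be illustrated with ordinary steepest descent. A hedged sketch (my own, not Parker's algorithm): fitting y = w*x to a few data points by repeatedly stepping against the gradient of the squared error.

```python
# Learning as error minimization: fit y = w*x to data by steepest
# descent on the squared error E(w) = sum((w*x - y)^2).

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

def error(w):
    return sum((w * x - y) ** 2 for x, y in data)

def gradient(w):
    return sum(2 * (w * x - y) * x for x, y in data)

w, rate = 0.0, 0.01
for _ in range(200):
    w -= rate * gradient(w)   # step against the gradient

print(round(w, 2))   # 2.04 -- the least-squares answer, 28.5/14
```

A neural net, in Parker's telling, is this same loop with the gradient arithmetic spread across many simple units working in parallel, which is why, on sequential hardware, the loop above is just as fast.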
Avram Tetwsky talked about tasking, the Ada facility for concurrent programming based on Tony Hoare's CSP language. He warned against the use of the simple task construct, in part because it limits your flexibility in passing data between tasks.
Several speakers presented what one would normally think of as AI languages, and many of these speakers concentrated on non-AI uses of the languages. It looked as though the conference organizers had asked the speakers to demonstrate that AI tools could really be used for serious purposes.
Dick Gabriel talked about Lisp as a general development language and as a systems language. He showed how to develop a sort of generalized spreadsheet using the Common Lisp Object System.
I Didn't Say That
It was Gabriel, incidentally, who loudly ridiculed Sun's plan to rewrite Unix in C++. Structured programming, he claimed, had threatened to stop cold the pace of advancement in software development in the mid-1970s, but, as it turned out, only retarded it for five years. C and Unix, he said, will stop us cold for 25 years. It was Dick Gabriel who said that, remember---not me. I'm just an innocent paradigmologist. Gabriel's at Lucid Inc., in Cambridge.
John Malpas presented Prolog in one workshop as an application language and in another in a software engineering context. He pointed out that the self-descriptive quality of the language makes it possible for a Prolog program to document itself to some extent.
To find out about SD '89, write to Miller Freeman, Seminar Dept., 500 Howard St., San Francisco CA 94105, or call 415-397-1881.
How Logical Is Prolog?
You know John Malpas's work: He did an article for us on Prolog. He and Dave Cortesi and I have made the most fuss over Prolog in these pages, and I wonder if I shouldn't feel a bit guilty about my part. There are a lot of people playing with Prolog today, and I choose that verb deliberately.
In a previous life, I was a consultant in research design and data analysis. It troubled me that many of my clients, all graduate students and faculty members, wanted to perform statistical analyses whose assumptions they did not understand. Now, as someone who has encouraged the widespread use of Prolog, I must take some of the guilt for the legions of Prolog programmers who don't know what resolution is.
To relieve my guilt, I'll tell you about a new book on Prolog that just came in. The book is Prolog Programming in Depth by Michael Covington, Donald Nute, and Andre Vellino (Glenview, Ill.: Scott, Foresman, 1988).
About half this book is spent defining the language, which the authors do well and at a level an experienced software developer can appreciate. The discussion is strong on practical tips and bibliographic references, and on how features have been implemented in different compilers. There are also appendixes on debugging and on features of Arity's and Borland's Prolog products.
The other half of the book presents artificial intelligence applications. There are no surprises in the selection of topics---search heuristics, expert systems, inference engines, natural language processing---or in the example programs the authors include.
Never Tell Me the Odds
The book is informed and informative. For example, the authors raise doubts about the confidence factors widely used in expert systems. Abundant research shows that people, expert or not, are poor at assessing conditional probabilities, and in fact at assigning numbers to just about anything. Confidence factors ought to be examined with a skeptical eye, and these authors are appropriately skeptical.
They spend just one chapter laying the logical foundations of Prolog, but they deal with the implications of its logic throughout the book.
They do explain resolution and how Prolog uses resolution, producing proof trees via SLD resolution. They explain that SLD is sound (never letting you infer something that doesn't logically follow from other statements) and complete (finding all possible inferences), and explain how Prolog's implementation of SLD resolution is sound but not complete, and they tell why it was implemented that way. They talk about the closed-world assumption and the way Prolog handles negation, and what these things imply.
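The incompleteness the authors describe comes from Prolog's depth-first, leftmost-first search strategy. Here is a sketch of the phenomenon (my own, in Python rather than Prolog): a left-recursive branch, modeled as the cycle a->a listed first, defeats depth-first search even though a one-step proof exists, while a complete strategy such as iterative deepening still finds it.

```python
# Depth-first, leftmost-first search, like Prolog's, with no loop check.
# The cycle a->a listed first plays the role of a left-recursive clause;
# a fuel counter stands in for nontermination.

edges = {"a": ["a", "b"], "b": []}

class OutOfFuel(Exception):
    """Raised when the depth-first search would run forever."""

def prolog_style(node, goal, fuel):
    if fuel[0] <= 0:
        raise OutOfFuel           # the infinite branch won
    fuel[0] -= 1
    if node == goal:
        return True
    return any(prolog_style(nxt, goal, fuel) for nxt in edges[node])

def depth_limited(node, goal, depth):
    if node == goal:
        return True
    if depth == 0:
        return False
    return any(depth_limited(nxt, goal, depth - 1) for nxt in edges[node])

def iterative_deepening(node, goal, max_depth=10):
    # A complete strategy: bounded depth-first passes of growing depth.
    return any(depth_limited(node, goal, d) for d in range(max_depth + 1))

try:
    answer = prolog_style("a", "b", fuel=[200])
except OutOfFuel:
    answer = None                 # the query loops; no answer ever comes back
print(answer)                         # None
print(iterative_deepening("a", "b"))  # True
```

Nothing unsound happens here; the depth-first searcher never asserts a false proof. It just dives forever past a true one, which is exactly the trade the Prolog implementers made for speed.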
Nevertheless, I wish the authors had said more.
They give practical advice on the use of the cut operator, but don't fully clarify the effects of cut, which some people fear can compromise the logic of a program. They should have said that the cut operator has no logical significance whatsoever: Its use cannot change the logic of a Prolog program. Cut just prunes the proof tree, with a gain in efficiency but a loss in completeness.
What the authors refer to as "red" cuts are a special case. Here the programmer consciously writes code that is declaratively incorrect (logically incorrect), depending on his knowledge of the order of clause evaluation to keep the program from crashing. I wish the authors had come down harder on this kind of programming, which undermines any notion of Prolog as programming in logic, and ties the code to nonparallel implementations.
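The pruning behavior of a green cut can be modeled outside Prolog. In this illustrative sketch (mine, not the book's), a generator enumerating every solution corresponds to exploring the whole proof tree, and committing to the first solution corresponds to a cut: the set of true answers is unchanged, but most of the tree goes unexplored.

```python
def solutions(goal, candidates):
    """Yield every candidate satisfying the goal, like backtracking
    through the full proof tree."""
    for c in candidates:
        if goal(c):
            yield c

primes_under_30 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
goal = lambda p: p > 10

all_answers  = list(solutions(goal, primes_under_30))  # complete search
first_answer = next(solutions(goal, primes_under_30))  # "cut": commit to the
                                                       # first proof, prune the rest
print(all_answers)    # [11, 13, 17, 19, 23, 29]
print(first_answer)   # 11
```

The efficiency gain and completeness loss are both visible: the cut version touches only five candidates, but a caller who wanted all six answers is out of luck.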
The principle of negation-as-failure and the closed-world assumption (CWA), both relevant to the logic of Prolog, are not equivalent. Frankly I can't tell you how they differ, although I know that CWA is more powerful. But unless I missed it, the authors of this book don't clarify the point, and I think it is worth discussing in a book that examines Prolog programming in depth.
Despite these points, Prolog Programming in Depth is a good book. For the time being, though, the Prolog programmer who really wants to understand the logical structure of the language he or she is using may just need to read a book on the relevant aspects of logic. One on my shelf is Foundations of Logic Programming by J.W. Lloyd (New York: Springer-Verlag, 1987).
More Books
For the less committed, two good books that explain resolution briefly (in a chapter or appendix) are Mathematical Theory of Computation by Zohar Manna (New York: McGraw-Hill, 1974), an older book with 20 solid pages on resolution; and Natural Language Understanding by James Allen (Menlo Park, Calif.: Benjamin/Cummings, 1987). You would not buy either book just for their treatment of resolution, but both are books I thought you might have access to, and the Allen book is worth getting if you have any interest at all in natural language processing. I recommend them because I don't think it's reasonable to expect people who are merely experimenting with Prolog to tackle a book on logic; most purchasers of Prolog products today bought them precisely to experiment with the language.
With any new paradigm, the first thing you want to do is experiment with it; see what its model problems look like and where it demands creative thinking, or creative rethinking about familiar problems.
Transputer Meditation
Parallel processing is a radical paradigm shift, encompassing not just one class of new paradigms but a whole curriculum of them. The most radical of these can force you to rethink fundamentally how you approach familiar problems. I opened the topic of parallel processing here last month, talking about the INMOS transputer chip, and about occam, the language developed for programming transputers. I thought I had said about all I could until I actually got my hands on a transputer development system to play with.
But after I wrote that column, my Munich-based editor friend Jürgen Fey dropped by. He was in the country on a quick trip to SD '88 and other Silicon Valley attractions, and, as he usually does when he comes to California, he found time for a visit. We ate lasagna and drank California wine, swapped stories and the names of some good books, played with the dog and bounced on the trampoline, and finally, around midnight, we sat down and talked transputers.
Jürgen had been able to get his hands on a transputer development system to play with, and had been building a transputer board. He reminded me that, in the time since INMOS had designed the transputer chip to be used for parallel processing, a number of system development projects had been using transputers, proving the chip's practicality. Many of the projects were defense industry jobs, where details can be hard to come by and cost considerations differ from those in commercial markets. Nevertheless, such companies as Sun and Atari are now investing in transputers for commercial applications. Jürgen had been bitten, too. He was eager to get back to Munich to finish his transputer board to show at the CEBIT show in March.
The Parts of TDS
I asked Jürgen if he was doing his development using TDS, the development system supplied by INMOS, and what he thought of it. He said he was, and that it was solid. TDS includes occam, a linker, an editor, a debugger, libraries, and a configurer.
It's the configurer, in part, that would allow Jürgen to develop parallel-processing software on a single-transputer system if he wanted to. The configurer allows you to do your development work using one transputer, simulating a network of transputers in software, and then configure the program for a multiple-transputer system. You tell the configurer how many processors you have and where the links are, and that, I gather, is pretty much that.
Jürgen's initial system, though, contains two transputers, and that's because of the debugger. It's called a network debugger, and is particularly interesting, actually requiring a two-transputer system. The target program runs on one transputer, the debugger on the other. Jürgen says it's very powerful.
The folding editor is also interesting. It allows you to collapse detail, much as an outline processor does.
Occam's Praiser?
Jürgen then briefed me on the chip. There are three families of transputers now: the 16-bit T2xx, the 32-bit T4xx, and the 32-bit T8xx with an FPU. A transputer has four I/O ports called links, which facilitate the development of transputer networks. The occam language supports the links directly via what it calls channels.
Jürgen thinks the transputer is well-designed for parallel processing. In addition to the external parallelism it facilitates, there is a fair amount of parallelism inside the chip. Each of the four transputer links has DMA, and can perform memory accesses in parallel with each of the others and in parallel with the CPU, the FPU (if present), the ALU, and the integer unit.
Because of the transputer architecture and the nice match between the architecture and the occam language, many things you would like to be able to do with parallel processors are easy. Jürgen drew quick sketches showing how you would implement multiplexors and systolic arrays with transputers.
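The channel style occam builds on can be mimicked in other languages. Here is a rough sketch (mine, with threads standing in for processes and buffered queues for channels; occam's channels are unbuffered rendezvous, so the analogy is loose): a two-stage pipeline in which one process squares values and hands them down a channel to another that sums them.

```python
import queue
import threading

def squarer(inp, out):
    # First pipeline stage: square each value, pass it downstream.
    while True:
        x = inp.get()
        if x is None:            # end-of-stream marker
            out.put(None)
            return
        out.put(x * x)

def summer(inp, results):
    # Second pipeline stage: accumulate squares, emit the total at the end.
    total = 0
    while True:
        x = inp.get()
        if x is None:
            results.put(total)
            return
        total += x

chan_in, chan_mid, chan_out = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=squarer, args=(chan_in, chan_mid)).start()
threading.Thread(target=summer, args=(chan_mid, chan_out)).start()

for v in [1, 2, 3, 4]:
    chan_in.put(v)
chan_in.put(None)

result = chan_out.get()
print(result)   # 1 + 4 + 9 + 16 = 30
```

On a transputer network each stage would sit on its own processor and each channel on a hardware link; that direct mapping from program structure to hardware is what makes the systolic-array sketches so natural.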
Some things, though, are not so easy. Occam is not a rich language, and C and Pascal programmers will find some of its limitations annoying. Its inability to do mixed-mode arithmetic, for example, is annoying. Its lack of operator precedence is alarming. And together these limitations can produce code that is full of parentheses and explicit type conversions.
Although occam is a high-level language, it permits some assembly-like optimizations. One of the most important lessons Jürgen learned was that indexes must be kept internal to the chip and large arrays external. Since the transputer may have 2K to 4K of internal RAM, this can become an issue.
But when you get beyond the optimization tricks, parallel processing in any language can be a nightmare. Systolic arrays are a simple technique for implementing parallelism, but most of the parallel equivalents of sequential techniques are yet to be discovered. And Jürgen posed the question, how do you document a parallel-processing system for your boss/client? Nassi-Shneiderman diagrams won't work.
One broad class of models that Jürgen thinks may prove fruitful is neural networks. Although the neural net model probably won't fit every parallel-processing problem, it is strictly parallel, and it works. Jürgen's next step will be to investigate neural net models. He thinks there are about 50 of them, and he'd like to get comfortable with at least 10 before he draws any conclusions about their usefulness for his goals.
The Homebrew Computer Club vs. Japan, Inc.
Part of the appeal of parallel processing for me is that it forces you to jettison so much mental baggage. Jürgen, who also likes to travel light, likened the situation in parallel processing today to that of the Homebrew Computer Club in the 1970s, when hobbyists brought together their wire-wrapped boards and code and swapped ideas while hacking a trail to a new technology.
It's appealing to view parallel processing as a kind of hacker frontier, and that's not altogether wrong; but companies and governments with lots of money to spend have also been investigating parallel processing. Jürgen told of interviewing the head of Japan's ICOT, who talked about the Japanese commitment to research in parallel processing, and about their plan to develop an automatic parallelizer. The program would automatically convert any sequential algorithm to an efficient parallel form. The plan failed; the researchers had to settle for a simpler goal: developing a tool that would interact with a savvy programmer to help him parallelize the algorithm.
Jürgen said he took comfort in that failure.
Then Was Now and Now Is Then
Although I think it's clever, the above subhead doesn't really fit this closing note. But "then was now and now is then" has been haunting me, and I knew that if I didn't use it soon somewhere in my writing, it would insert itself into my conversation in some even less relevant way, probably making me look like a fool. Looking like a fool in print is something every writer gets used to. I hereby place "then was now and now is then" in the public domain: feel free to use it as you dare. You may even find an appropriate use for it, and then you won't look like a fool.
Five years ago, writing in IEEE Spectrum, Robert Kahn of DARPA gave this projection for computing in the 1990s:
Computer hardware: advanced packaging and interconnection techniques, ultra large-scale integration, parallel architectures, 3-D integrated circuit design, gallium arsenide and Josephson junction technology, optical components.
Computer software: concurrent languages, functional programming, symbolic processing (natural languages, vision, speech recognition, planning).
Computer performance: one giga-instruction per second to one tera-instruction per second.
Kahn was describing fifth-generation computer technology, which the Japanese began planning for in 1979 and are pursuing with single-minded dedication today. I don't mean to hint that Kahn's predictions make him look like a fool. He may have missed on a couple of points, but some rough beast does seem to be forming out of the materials he inventoried, and the hour of some sort of fifth-generation computer technology seems nearly at hand.
But whither does it slouch? Says Dick Gabriel: "Europe will be pouring six times as much government money into programming as the U.S. in the next decade. I expect the lead in software to move abroad."
Five years ago, it seemed plausible that the next generation of computer technology would be developed first in the United States. Today, based on funding and directness of effort, the most likely developer for fifth-generation computer systems is Japan, followed by a combined European effort, followed by the United States.
I guess it's a good thing we got all that practice reading their manuals when we bought their stereo systems.