LETTERS

OO OSs

Dear DDJ,

Mike Matchett's letter in the August 1992 issue describing a "futuristic" OOP system was a refreshing eye opener. I've spent the last five years developing a system very similar to the one Mike describes as a commercial venture, and I am close to delivering a beta version.

My system is wholly objectized in that there is nothing but objects. The programmer makes no distinction between an object in RAM and one on disk; the system takes care of virtual memory details. I should point out for the record that the original version of Smalltalk as implemented at Xerox PARC was wholly objectized.

I refer to collection objects as "class records." Each class record is an object which is an instance of the class "class record." Variable typing is subordinate to classes; e.g., we have a byte class. Programs are composed of module objects; the executable portion (object) of a module is called an "executable record." Further, we can create and destroy objects and instances thereof during run time. The system is capable of redefining itself during run time. I actually did this when going from the prototype to the beta version.

A class record is, in part, a collection of member descriptors describing the members of the class, including member functions or methods. (The system tends to leave anthropocentric concerns like terminology to humans.) I haven't considered a "qualification-for-membership" function, but it's a good idea. The system does expect each class to provide 13 system service "actors" (more of this strange need for labels); e.g., a printer which provides hardcopies of class instances. Each member is implicitly--hopefully explicitly--an instance of some class. The byte class is composed of eight instances of the bit class. Object C having members A and B can invoke methods x(A) and y(B).

I deal with the "cast in stone" side effect of compilation by getting rid of compilation. Code is reverse assembled in real time, with almost no noticeable delay. Programming languages are implemented as groups of objects called "macro descriptors," one macro descriptor per instruction. Modules are linked dynamically at run time and need not be written using the same language. For still more flexibility, a module may be written using more than one language. A Lisp cdr instruction could be enclosed by a C for(;;) instruction. The language purists among us will have more cause for complaint. Mike raises some excellent possibilities to which I had not given serious thought. Foremost among these is the notion of an interobject language which handles translations among different dialects. I also like the idea of a printer installing its own driver object. Sometimes when you go way out into science fiction, you bump into reality.

Eric Young

Kalamazoo, Michigan

It's in the Numbers

Dear DDJ,

Reader Mike Matchett is quite right to wish for "an operating-system environment wherein everything [is] an object." The full potential of object-oriented technology won't be seen, let alone reached, so long as our OO environments support only static classes defined at compile time. However, Mr. Matchett is mistaken in claiming that to envision such a system entails "going way out into science fiction." It's not science fiction, it's history! This chapter of the Silicon Valley saga deserves to be better known.

Back in the late 1970s, while a couple of guys named Steve were showing off their nifty gadgets at the Home Brew Computer Club, Tymshare was a thriving high-tech venture headquartered in Cupertino, California. Tymshare had a concern: The operating system it used to provide time-sharing on IBM mainframes via its Tymnet network was VM/370. At that time there was serious doubt as to whether IBM would go on supporting VM, since it competed with Big Blue's MVS flagship. So Tymshare put a small group of their best systems gurus to work developing a VM replacement. These people dubbed their creation the Great New Operating System In the Sky, or GNOSIS for short. It was not only object oriented, but it had a microkernel (long before Mach), was capability based (and hence much more secure than any commercial OS), and featured single-level storage with mirrored disks and built-in journaling. In benchmarks, it processed transactions faster than CICS (IBM's standard TP monitor) on the same 370-architecture hardware.

No special programming languages were needed to develop OO applications under GNOSIS. Tymshare used unmodified IBM program-product compilers for 370 assembly language and PL/I, mainly because both had macro facilities. Just a few macros extended PL/I to support what we would nowadays call the GNOSIS API. In the summer of 1984, Tymshare hired a few people, myself included, to test the commercial viability of GNOSIS. Management wanted to know whether experienced procedural programmers unacquainted with OO concepts could be productive after three or four weeks of training on GNOSIS. As the saying goes, "the operation was a success but the patient died." The training went well, several applications were built and tested in record time, and....

Shortly thereafter, McDonnell Douglas bought Tymshare, primarily to acquire the thriving Tymnet business, and sent teams from St. Louis to Cupertino to find out what else came with the package. After considerable benchmarking, pondering and negotiating, MDC management determined that GNOSIS was "not strategic" for their view of the future. The GNOSIS developers arranged to get laid off, took their severance pay as earnest money, got venture capital backing and started up Key Logic. Key Logic sold a system called KeyKos: GNOSIS by another name.

Since then, McDonnell Douglas has pulled out of information services and gone back to making airplanes. Key Logic stayed in business until late last year, when it finally folded. The mainframe world had no interest in a better (but different) OS, and Key Logic's resources were insufficient either to support the missionary work needed to arouse such interest or to provide credibility for long-term vendor support. This is the Catch-22 which keeps small startups out of mature markets.

One of the last feats Key Logic performed before shutting its doors was to recast KeyKos as a "nanokernel" running on RISC hardware. Above the nanokernel ran an implementation of UNIX. Among other things, this allowed the computer to be powered off while UNIX was chugging along with users updating files, editing documents, etc. in full confidence that, once power was restored and the machine was rebooted, UNIX would wake up and carry on from where it left off, apart from a moment or two of amnesia. In April 1992, Alan Bomberger of Key Logic (now at Amdahl Corp.) presented a paper on this at a USENIX workshop on "Microkernel and Other Kernel Architectures" in Seattle. Norman Hardy, a senior architect of GNOSIS (and of Tymnet itself), published a paper on the KeyKos architecture in the September 1985 issue of the ACM's Operating Systems Review.

There have been other proposals and designs for OO operating systems, but I don't know of any that have gone as far as GNOSIS. Even leading academic authorities seem to be unaware that such a thing has been done, not on the scale of a laboratory proof-of-concept but as an industrial-strength implementation on commercial hardware. As a case in point, let me quote computer pioneer Maurice V. Wilkes on "Computer Security in the Business World" in the "Computing Perspectives" column of Communications of the ACM, April 1990 (Volume 33, Number 4). Dr. Wilkes wrote, "Much hope [for improved computer security] was later based on the use of capabilities, or tickets, the mere possession of which gives the right to make use of some resource.... Some experimental systems were demonstrated in which the capabilities were implemented in software, although it should have been clear from the beginning that such systems could not, for performance reasons, be of more than theoretical interest.... The final conclusion must be that...the capability model...is of no use to us since efficient implementation is not possible."

How could an ACM Turing Award winner reach this totally mistaken conclusion six years after benchmarks showed that the capability model, properly implemented, outperformed standard IBM software on the same platform, and five years after Hardy published his description of the architecture which enabled this performance? One answer is that the benchmarks were confidential, and the Operating Systems Review isn't as widely read as DDJ. Another is that capabilities need to be implemented in a small, trusted kernel in conjunction with interobject communication in order not to impose a performance penalty. The studies on which Dr. Wilkes based his statements show only that capabilities don't integrate well into conventional OS architectures.

Edward Syrett

Menlo Park, California

Dear DDJ,

I'm writing regarding "Numerical Extensions to C" by Robert Jervis in the August 1992 DDJ, but find the need to ramble on about some other things as well.

I began reading Dr. Dobb's around 1978, just after the IMSAI 8080 was in production (I still have one in my closet), but before the Cromemco-Z8 had been announced. My company started buying them with a little 4x4 inch monochrome screen that at the time was readable without a magnifying glass. No longer did 4000 people have to wait minutes for a response from the two Univac 1108s after pressing the Return key. DDJ was the hottest magazine around. It contained the latest and greatest technical tips available.

Then came IBM. All of a sudden, IBM PCs proliferated on every desktop at the laboratory where I work. They began to replace the Texas Instruments terminals (TI Silent 700s) and Daisy Writer KSRs (keyboard send/receive units). In spite of its Small-C compiler articles, Dr. Dobb's seemed to be lagging behind, and I let my subscription lapse.

Several years later, a friend at work mentioned an article in Dr. Dobb's Journal in response to a question I asked him. To be truthful, I didn't realize that DDJ still existed. Happily, and much to my surprise, it did, and I found that the articles in it were still current and as pertinent as they always were. BYTE magazine has turned to trash, and PC Magazine is becoming questionable, but DDJ still addresses specific issues as well as broader, almost philosophic concerns (and I don't have to contend with as much garbage advertising and "blown in" trash mail as in most magazines). Embedded Systems Programming is the only other computer magazine I read regularly.

Now back to the reason I started to write this letter: I've been exposed to most computer languages and have written code in many of them, including various assembly languages. Around 1980, I discovered the C programming language. It was love at first sight. Not only was there a high-level language that made programming easier, but it also made debugging easier. Whenever I wrote a statement in C, I could visualize the machine code that would be generated. C is an elegant shorthand for writing programs that machines can execute. There was a direct correspondence between C operators and the instruction set of most machines that executed them, and the operators were easily accessible, generally with one or two keystrokes. One of my pet peeves with Pascal, in addition to its wordiness (Ada is even worse), was not being able to shift left or right. Computers are much faster at shifting than they are at multiplying, yet whenever I had to code something like x:=x*2, the compiler would invariably generate a multiply instruction rather than the shift that I wanted, as in x<<=1.

The relevance of all this to the proposed extensions to the C language is that the spirit of the original language should be preserved. I believe C was intended to be not only portable across machines, but upwardly compatible with new machines, not downwardly compatible with older ones. This is probably why the size of an integer wasn't made part of the language by K&R. Instead, the reader was cautioned that "int will normally reflect the most 'natural' size for a particular machine," and that "all you should count on is that short is no longer than long." A program that abided by these rules in the '80s runs on any machine today. I've found myself declaring index variables as unsigned char or short just because I knew they wouldn't exceed 255 or 32,767, when in fact it makes no difference to the computer. On a machine with a 32-bit data bus (or 64, 128, etc.), it takes just as much time to add 1 to a byte as it does to add 1 to an integer.

Probably one major concern of those considering extensions to the C language is to minimize the addition of new reserved words, for fear that someone, somewhere, may have used one of those words as an identifier in a private program. Terseness is certainly one of C's desirable features; however, if the language is to be extended to an entirely new class of machines (massively parallel, with multiprecision complex floating-point arithmetic and extended character sets), a few extra reserved words (and perhaps operators, if any are left) will have to be added. I would rather see this happen than be forced to use some ill-thought-out reincarnation of Cobol (i.e., Ada). Even APL would be preferable. One last thought: At present, I'm forced to program a 69R000 CPU (UTMC) in its minimalist RISC assembly language (load, operate, store) because it's the most radiation-hardened CPU there is. The original Fortran (yuck) algorithm used double-precision complex floating-point arithmetic on multidimensional matrices (8 x 3 x 3). I would give my left foot to have any kind of C compiler for this machine. While everyone is debating the direction of programming languages for the next century, try to keep in mind that there are those of us who are less privileged.

Ron Dotson

La Crescenta, California


Copyright © 1992, Dr. Dobb's Journal