LETTERS

Designing Software Design

Dear DDJ,

It was with great enthusiasm that I read Mr. Kapor's article "A Software Design Manifesto" (DDJ, January 1991). I found it particularly interesting that Mr. Kapor recognizes that software design crosses many disciplines. I do a lot of research for my designs, and I find myself looking for human factors impacted by computers under the social sciences, software/hardware interfaces under electrical engineering, data structures and software architecture under computer science, and systems design under business administration. These disciplines, and others, factor into a well-designed product, yet many are missing in academia. However, I do believe that both the computer industry and colleges are changing their views on how software should be designed, developed, and implemented. Recognizing the difference between design and engineering is the first step. Once colleges and companies begin to produce "software designers" trained to the level Mr. Kapor has outlined (or some similar level), software will begin to be user friendly and may reach levels of reliability never before dreamed of by even the most imaginative NASA engineer. Until then, we will continue to have software that is marvelously engineered but that most humans will be unable or unwilling to use because it is design deficient.

As a final note, I would like to add a course in "maintainable design" to Mr. Kapor's list of topics to be studied. It is a subject most companies would rather avoid, even though in so doing companies are subjecting themselves to tremendous redesign and reengineering as well as bad feelings among users. In an industry driven not only by its own passion but the passion of its customers as well, we cannot afford bad feelings.

Mike Maitland

Compusol Inc.

Camp Hill, Pennsylvania

Dear DDJ,

Thanks to Mitch Kapor for the article "A Software Design Manifesto" (DDJ, January 1991). I have recently been complaining (to the wrong people, of course) about the abysmal approach the mainstream software manufacturers take to the user interface and other aspects of programming. Mitch Kapor's article put into words much of what I've been feeling: Software design needs to be taken more seriously.

Many of my coworkers are computer illiterate, or are just beginning to learn how to use computers. The typical response I get from them is, "Oh, I don't know what you mean. The program can do all sorts of neat things," or some similar statement. They don't realize that the power hides behind a hideous mask of controls. They believe inexperience is causing their trouble. I like to warn them that experience does not always help. Proper software design could change that.

Having written a handful of small special-purpose programs, I can understand the discipline required to design a program. I am also much more demanding of the programs I use than the normal user. I know better.

I am, unfortunately, a victim of the same thought process when I write code. I spend 10 percent of my time designing the program and 90 percent perfecting the algorithm. I should (and will from now on) spend more time on the design aspect. If I planned more during the design process, I could likely save the considerable time usually spent modifying the design after the fact.

A software architect! Such a great idea.

John Sandlin

San Antonio, Texas

Dear DDJ,

Thank you for publishing Mitchell Kapor's article "A Software Design Manifesto" in the January 1991 issue. I applaud Mitchell Kapor's comments for their enlightened thought, courage of conviction, accuracy, and timeliness. The issues he has addressed needed public exhibition in order to shake up the personal computer and software industries.

Mitchell and I are part of an imperceptibly small group in the software engineering community that completely comprehends and embraces the "software design viewpoint." Thus, we battle daily against software mediocrity. My background in industrial design and consumer product development management is the foundation for my venture into the software design and engineering area.

Consumer product development focuses on user requirements. Designers learn sensitivity, adopt the user needs as their own, empathize with the user, and become the user in order to understand the user's needs. Thus, state-of-the-art technology is forced to conform to the humanness of the user. Unfortunately, the opposite situation manifests itself in MIS departments, software development companies, and consulting firms.

Today, those persons who relate to the mechanics of programming are given the task of defining and addressing the humanness of the user. Therefore, the user gets the analyst/programmer's coded perception of the user's needs. The depressing part of this situation is that neither the user nor the analyst/programmer is aware that the software could be more than it is in its present form. The reasons for this situation are:

First, analysts and programmers lack marketing or scientific research experience; hence, they cannot ask the right questions of the user to get the correct information for development.

Second, they cannot or do not communicate the versatility of the programming language to the user.

Third, analysts and programmers lack experience in time-and-motion methods, human factors, and perceptual analysis; thus they do not observe and evaluate the user's activities as they relate to software requirements.

Fourth, I have sensed the following attitude among some software engineers and some MIS departments: No one outside of our discipline can contribute much to the software development process.

Fifth, society promotes the following myth: The personal computer and programming are both difficult to learn and mysterious.

However, the user must assume some responsibility for software design failure.

Some users are afraid and are uncomfortable with computers or software development. Today, the PC is important to productivity in the workplace, yet employees still resist it. Typically, the attitude is: Don't teach me more about computers than I need to know in order to get my job done. Because users refuse to learn that little extra about the software, they continue to make the same mistakes, which increases their frustration. If this computer phobia could be overcome, users would make ideal candidates when polling for software ideas.

Sometimes users don't appreciate the value of the data they possess. Since users have only a cursory relationship with the computer, how can they ask for software products that meet their needs?

Mr. Kapor's article supports a philosophy I advocated for years before I founded Alexander PC Systems, a software engineering firm. When we begin a software development project, the marketing and user research are completed first. Then a design concept and matrix structure are created. The design concept is user tested and perfected before the coding is started.

We are obligated to overthrow traditional software development methods. The losses from mediocre software must cost millions of dollars each day. Current practices need to be replaced with "user-focused software development," or in Mr. Kapor's words, a "software design viewpoint."

Bryan R. Alexander

Morristown, New Jersey

Who's Got the Secrets?

Dear DDJ,

In his January "Structured Programming" column, Jeff Duntemann pokes at Turbo Pascal's newly introduced "private" data fields and methods in Version 6.0, and brings up the thorny "rose" of keeping programmers handcuffed and blindfolded.

The myth is that this binding and blinding contributes to productivity. This might be true if programmers were a subhuman lot incapable of focusing attention on "a level at hand" without blinders and cuffs. Actually, it's paranoid entrepreneurs who need programmers to write the code that is the wealth they are stealing and who imagine their victims will steal it back if they "see it whole." This "hiding" is simple divide and conquer. The paranoid's bind: If the programmer knows what he's doing, he might go elsewhere to do it. If he doesn't, he might not be able to do it there. Tough.

Actually, "access management" belongs in the operating instructions, not in the compiler directives, where it directs the unthinking compiler to turn out Rube Goldberg code.

Borland's use of Mandelbrot's "self-similarity" is aesthetically pleasing. Objects with their miniature "interface/implementation" (public/private) notation are ...cute. But the havoc wreaked upon extensibility isn't an occasional lost feature. It's never knowing what can and cannot be "overridden" because of a dependency on some proc near--the privates are listed, but the dependencies are not. In short, the whole tortuous interlacing of proc nears and proc fars is the problem.

The solution--and it works the same for 5.5 and 6.0--follows from recognizing that access management should be in the operating instructions, not in the compiling process.

Nonparanoids might figure that the programmer whom the object is delivered to has an interest in using it according to its design, and will look at the directions for using it.

We can make those directions a bit deeper than the parameter list. In fact, let's use "private" on the line identifying a field or method, and in comments. A {pvt} by a field means "Look, when I get this for you, I'm going to process it some, so if you grab it yourself you better know what you're doing with it. And if you write in it, you're a purblind idiot."

And the {pvt} by a method says "This is set up for use by other methods; use from outside can screw things up; and even overriding those other methods requires checking for possible special instructions."
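The convention might look like this in a Turbo Pascal object. This is only a sketch of the letter's proposal, and the object, its fields, and its methods (Counter, Bump, Reset) are hypothetical names invented for illustration:

```pascal
type
  Counter = object
    Value: Integer;     { pvt: maintained by Bump/Reset; read it if you
                          must, but if you write in it, you had better
                          know what you are doing }
    constructor Init;
    procedure Bump;     { public: the intended way to change Value }
    procedure Reset;    { pvt: called by Init; check for special
                          instructions before overriding }
  end;

constructor Counter.Init;
begin
  Reset;                { depends on Reset -- a dependency the 6.0
                          "private" keyword would not document }
end;

procedure Counter.Bump;
begin
  Inc(Value);
end;

procedure Counter.Reset;
begin
  Value := 0;
end;
```

Nothing here is enforced by the compiler; the {pvt} comments are the "operating instructions," visible to any programmer who opens the source, and they work identically under 5.5 and 6.0.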

So, we can have it both ways. We can have public/private distinctions without hopelessly tangling inter- and intrasegment calling realities, thereby forcing a programmer to tiptoe among imminent system crashes.

Just hold in mind that access management belongs in the operating instructions, not in the compiling. If you figure that you can't trust a programmer to focus on the "level" that he's working on without "hiding" the data and code for all other levels in a locked drawer, you probably shouldn't be hiring programmers in the first place. Or vending code chunks to them. And if a buyer wants to drive his new car around "out of tune," isn't that his or her prerogative?

Crine Outloud

Berkeley, California

WIN Here and There

Dear DDJ,

The WINTHERE program you presented in the January 1991 issue looked very interesting to me. Our company has a device driver which needs to print error messages on occasion, and has trouble with Windows 3. Currently, the user must tell the driver during installation if it will be running under Windows. The problem is that since the device driver is part of DOS, it cannot call DOS routines to print to the screen. All messages are printed instead by the BIOS. Unfortunately, Windows overrides the BIOS, and while the BIOS thinks it has written a message to the screen, it has in fact written to the bit bucket. If we could tell that Windows was running, we would just default to the critical error handler, and let Windows take care of it. The WINTHERE program promised to do just that.

The 4680h call is not exactly what Ben Myers supposed. I discovered this call by disassembling, just as Ben did. However, Windows changes the multiplex interrupt during operation. The 4680h call is available within the DOS window, or when a non-Windows application program is running. (In fact, this is the only time under Windows when the BIOS can write to the screen.) When Windows has the screen in graphics mode, this call is not available, and, having gone through that code too, I found there is no simple call under interrupt 2Fh to check whether Windows is installed.
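For concreteness, the probe under discussion reduces to a few lines of Turbo Pascal. This is a sketch only; as the letter explains, a zero in AX on return is meaningful solely while a DOS window or non-Windows application has the screen, so the function name is deliberately tentative:

```pascal
uses Dos;

{ Issue the multiplex-interrupt call with AX = 4680h.  Per the
  WINTHERE article, AX = 0 on return suggests Windows 3.0 in real
  or standard mode -- but this answer is NOT reliable in all modes. }
function WindowsMaybeThere: Boolean;
var
  r: Registers;
begin
  r.AX := $4680;
  Intr($2F, r);
  WindowsMaybeThere := (r.AX = $0000);
end;
```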

Bill Hawkins

Winter Park, Florida

Ben replies: Thank you for pointing out that the interrupt mux call with ax=4680h does not detect all cases of Windows 3.0 running in real (/R) or standard (/S) mode. I admit that I did not spelunk into Windows with a Windows app that displays real memory locations, though I just got Logitech's Multiscope for DOS and Windows and it will permit me to do so very easily. I did run DEBUG in a window using a PIF that told Windows 3.0 that DEBUG is well-behaved in its use of the screen, which it is.

I am just as frustrated about the situation as you are, particularly since Microsoft has not responded with any bulletproof solutions, badly needed for TSRs, device drivers, disk defraggers, and other software. My interpretation of the lack of response by Microsoft is that Windows 3.0 has a hole in this area, one that will be addressed in a subsequent release, possibly Windows 3.1. Understand that this is my interpretation, since Microsoft is officially silent on the subject. In the meantime, the only recourse is to do as you are doing today--ask the user whether the driver will be running under Windows.

When a graphics app is running, even a well-disciplined old DOS app with an appropriate PIF, Windows intercepts all video calls done with interrupt 10h, and interprets them as it sees fit.

Another possibility might be to use int 21h, function 09h calls to display error messages from within your device driver. DOS maintains two stacks for int 21h calls. One stack is used for functions 00h through 0Ch, inclusive. The other stack is used for the other function calls, including file opens, closes, reads, and writes, which is what I assume your device driver does.
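As a sketch of that suggestion, the function 09h call looks like this in Turbo Pascal. This is an application-level illustration only; inside an actual device driver the registers would be loaded in assembly, and the two-stack behavior described above is the reason the call may be viable there at all:

```pascal
uses Dos;

{ Display a message through DOS function 09h (display string).
  Function 09h expects DS:DX to point at a '$'-terminated string. }
procedure ShowError(msg: string);
var
  r: Registers;
begin
  msg := msg + '$';          { DOS 09h strings are '$'-terminated }
  r.AH := $09;
  r.DS := Seg(msg[1]);       { address the text, skipping the }
  r.DX := Ofs(msg[1]);       { Pascal string's length byte    }
  MsDos(r);
end;
```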

Back To the Future Again

Dear DDJ,

Having just received a diskette in the mail with 1 Mbyte of programs for my HP-48 calculator, I was not expecting to surface for a week or so, but the mailman brought the January issue of Dr. Dobb's today, and so I made the usual exception and pored over the magazine with much haste so as to get back to graphing and calculating with the little pocket machine.

The first thing that caught my attention was Mitch Kapor's article, in which he makes reference to a "modern" programming language (C or Pascal). The article overall is quite good, as is most of the rest of this issue, but Mr. Kapor shoots himself in the foot with his biased and uninformed notion of a modern programming language. I won't bore you with the details, but Bill Gates, Ethan Winer of Crescent Software, and I will gladly demonstrate to anyone that you can still write programs in Basic that are faster, smaller, more portable, easier to read and analyze, and even more powerful in terms of advanced programming concepts than C or Pascal. Basic is both the People's Language and the preferred language of top professionals like myself (pat, pat), because of its widespread acceptance in the business and scientific community as well as its proliferation in virtually every computer system made, not to mention its inclusion free with practically every personal computer sold today.

The second thing that caught my attention was Jim Warren's article, in which he mentions that the first use of the term "personal computer" was in Rolling Stone magazine in 1974. I have an HP Journal article from May 1974 describing the HP-65 hand-held "personal computer," and I believe that, because of its earlier cousins and the founding in 1974 of the first "personal computer" user group, based on the HP-65, it could make a better claim. Persons who were associated with the Homebrew Computer Club in the Bay Area are generally accorded special status in the press when it comes to pronouncements on the origins of personal computing, and their views are almost never contrasted with those of a somewhat different group of users whose association began in 1967 with the introduction of the HP-9100A machine, now termed a "calculator," even though it had programmability, off-line storage, and a built-in printer.

Personal computing for the masses took a leap in the direction of nonprogrammability in early 1984 with the introduction of the Macintosh and its mouse-driven interface, but for myself and a number of associates it took on a much different tone at the same exact time with the introduction of the HP-71 pocket computer. This tiny little machine had these capabilities way back then: 80-Kbyte ROM-based operating system and language (with peripheral interface), 1 Mbyte RAM of contiguous address space (my machine has 393,000 bytes free over and above the operating system and language), the ability to be controlled by a standard 25 x 80 display terminal and keyboard, to name a few. To input and output information to other computers and devices you would use the Basic commands INPUT and OUTPUT. Ordinary nontechnical programmers like myself could communicate with the world in a way that Macintosh owners could not feasibly do because of the complications and expense involved. For those individuals to whom 80 Kbytes of ROM was not enough, hundreds of operating systems extensions could be loaded into the machine, providing virtually every capability known to mankind.

One last comment about the future in software development: I don't believe we will ever achieve a happy arrangement between developers, designers, users, and programmers if programmers are working primarily in C (this also implies the use of some assembler code), and the designers are having to be schooled in C at least enough so they are familiar with the governing principles of its use. We need a combination of factors to produce better software at reasonable cost, with reasonable performance, and soon enough to meet a reasonable demand. Some of these factors are:

  1. A language that is powerful enough and efficient enough to do the job, but not so cryptic as to be difficult to write and maintain code with. Basic 7.1 by Microsoft comes to mind, and Crescent's PDQ where size is critical. You could think of a computer language as just a tool, and the operating system itself as the more universal language through which various users communicate with each other, much the same as large numbers of people of different nationalities around the world communicate with each other in English today. What you would be missing in this analogy is the prior experience of people like myself who have used a language that is both the language and the operating system of the computer, the two being quite complementary to one another. It is extremely difficult to do some things in MS-DOS alone that are a snap in Basic, and the reverse is also true. When the two are combined (and as MS-Basic progresses from 7.1, it appears to take on ever more of that flavor), however, programming and productivity Nirvana are as nearly achieved as is possible with today's technology.
  2. Programmers who know how to use the language to its best ability, as compared to the more usual situation where the programmer's skills are divided and correspondingly diluted by the desire to work with many languages and/or operating systems, or many variations of the same language and/or libraries from multiple vendors. My experience with hundreds of programmers over years in Encino, Beverly Hills, Pasadena, Santa Monica, and El Segundo has convinced me that the vast majority of programmers are indifferent to considerations of productivity, and their preference for cryptic and hard-to-use languages is based not on the alleged power that these low-level dialects provide, but rather on the existential pleasures they give the programmer. It's time we "just say no" to this nonsense.
  3. A true standard library of functions as part of the language, much like HP Rocky Mountain Basic or the PC version called "HP-Basic," where commands actually change the collating order from ASCII to some other sequence (EBCDIC, for example) and OUTPUT ... will output data in any format to any device or memory variable without sweat or strain. Trust me when I say that a programmer's ability to produce a lot of usable code is greatly enhanced when he or she can get virtually all the needed capability from one source rather than a melange of material from several vendors.
  4. A commitment from a major supplier to maintain a library of high-level functions over a long period of time, with no runtime royalties, and with a solid user-approved upgrade and revision policy. I am constantly amazed that practically everywhere I go I see people looking for a way to port data (as an example) from .DBF or .WKS files to proprietary systems and back again, when a solution that will interface to their system is not readily available to them. I have developed a set of small (100-line) subprograms to read from and write to these file types using a fixed-length ASCII format (like .PRN or .SDF) for data interchange, where the SUBs can open, dump header information and data, and close a file in a small fraction of a second on an average AT-class computer, and the code uses only the simplest of Basic commands, which are amenable to even the ancient generic compilers and interpreters.
I guess that constitutes my manifesto, and I thank you for your attention.

Dale Thorn

Cleveland, Tennessee


Copyright © 1991, Dr. Dobb's Journal