Dear DDJ,
Thank you for printing long letters. Please continue.
Patrick J. Killips
Whitewater, Wisconsin
Dear DDJ,
Jeff Duntemann's November 1991 "Structured Programming" column once again demonstrates that those who ignore history are doomed to repeat it. In the rush to espouse such fads as GUIs and OOP, it's too easy to forget that we still lack even such amenities as well-constructed runtime libraries and uniform subroutine calling conventions for the traditional languages.
As early as the mid-seventies, the DECsystem-10, under the TOPS-10 operating system, offered uniform calling conventions. Parameters were passed by means of an "argument block," the first entry of which was always the number of parameters being passed. Each actual parameter was tagged with a type code. A program written in Fortran, Cobol, compiling Basic, or assembly language could therefore call a subroutine written in the same or any other of these languages, and could examine the argument count and the type codes to determine how it had been called. "Polymorphism" was therefore quite easy to implement.
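The argument-block convention described above can be sketched in C++. The type codes and layout below are illustrative only; the actual TOPS-10 convention used machine words and different codes:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative type codes only; the real TOPS-10 codes differed.
enum ArgType { ARG_INT, ARG_REAL, ARG_STRING };

struct Arg {
    ArgType type;    // type code tagging the actual parameter
    void   *value;   // pointer to the actual parameter
};

struct ArgBlock {
    std::size_t count;   // first entry: number of parameters passed
    Arg         args[8]; // the tagged parameters
};

// A callee can inspect the count and the type codes to determine how it
// was called -- a crude, runtime form of polymorphism.
int sum_ints(const ArgBlock &blk) {
    int total = 0;
    for (std::size_t i = 0; i < blk.count; ++i)
        if (blk.args[i].type == ARG_INT)
            total += *static_cast<int *>(blk.args[i].value);
    return total;
}
```

A caller in any language that builds this block can invoke sum_ints(); the callee neither knows nor cares which language built it.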
In contrast, programmers of today's microcomputers have to worry about such details as the difference between C and Pascal calling conventions and string representations, not to mention big-endian vs. little-endian representation of data.
The runtime libraries for the DEC-10 were equally well designed. Interdependence of library routines was minimized, and each library was prefaced with a table of contents to speed searches by the linker. In contrast, as Jeff observed, today's programmers of microcomputers too often get the whole gorilla when all they want is the banana. I sometimes use Nantucket's Clipper database language even when no database is involved, to take advantage of its ability to build a data entry screen with a minimum of effort. But my enthusiasm for this approach is somewhat dampened by one small disadvantage: Although the smallest possible program, the null program, will "compile" on my machine in one or two seconds, linking the resulting object file requires over 30 seconds, and the .EXE file occupies nearly 160K. That hardly seems a reasonable amount of overhead for a program which does nothing.
It is true of the traditional programming languages, no less than of OOP techniques, that we are unlikely to see much progress towards such desiderata as reusable code and portability until the industry recognizes that conventions for intermodule communication, like those for interprocess communication, belong with the operating system rather than with the individual programming languages. Nor will we see much progress in reducing object code size or increasing programmer productivity until language designers recognize that a well-designed runtime library is as important as the syntax of the language. And if all this were accomplished, might we find that the tools required to realize the putative advantages of OOP have been lying there all along, just waiting for an environment in which to use them?
Arpad Elo, Jr.
St. Johnsbury, Vermont
Dear DDJ,
I feel compelled to take exception to something Jeff Duntemann says in his otherwise excellent and enjoyable October "Structured Programming" column, "Sympathy on the Loss of One of Your Legs."
In the section entitled "Working Fast," Jeff quite rightly states that C is hopeless for what he calls "lightning development," and that until you have accumulated a truly tremendous high-level toolkit, Pascal is not much better--something that could well be true. But he then goes on to say that "Modula-2 is worse than both." This is simply not true. Nor is the reason he gives for saying so, namely that the tools aren't there and never will be.
In Europe, where working conditions are different and the demand for vertical applications provides a living for numerous software developers, we have members who are quite capable of turning out an intricate business-type application in a week or two. Modula-2 has proved to be an excellent language for lightning development and mad-dash gonzo programming, if only because it's hard to make mistakes, bugs are usually quickly located, and reusable program templates are easily established.
As for tools, I would be happy to ensure that any reader who writes to me receives a copy of a catalog published in Britain which is entirely devoted to Modula-2 tools.
As an example, I would mention that there is at least one application generator, one I use myself, which allows a sophisticated application to be built--most of it interactively--in less than a week, once you are up to speed. Even adding on the cost of a compiler, it costs less than Clarion.
The choice of alternative tools may not be as great as it is with C, but they are there. In any case, with Modula-2 it doesn't take long to write your own tools as you need them!
Finally, I would like to mention that in his June 1991 article, "What's New with Modula-2?" Kim King did not include our organization in the "Modula-2 Resource Guide: User Groups and Publications." So I would like to take this opportunity to correct the omission and supply the missing details: BCS Modula-2 Specialist Group, c/o The Secretary, 131 Carshalton Park Rd., Carshalton, Surrey, SM5 3SJ, United Kingdom.
Euan Hill
Surrey, England
Dear DDJ,
In Ray Duncan's review of How to Solve It: A New Aspect of Mathematical Method ("Programmer's Bookshelf," January 1992) he brings up a most interesting aspect of the mathematical (or any other) method when he raises the question of the phenomenon of subconscious work. This is a topic that has intrigued many. The talented and famous French mathematician Henri Poincaré was much intrigued by this subject and wrote several essays on it. I am sure Polya must have been familiar with these, and I believe Ray is being unfair when he says, "Polya opts out, however, when faced with one of the topics that most intrigues me, the phenomenon of subconscious work." I do not believe Polya opts out--rather, he believes that he has no more to offer on this subject than Poincaré.
There is, I am certain, extensive modern literature on this subject, but I must admit to being largely ignorant of it. There may well be some relation to the (perhaps simpler) problem of how we pull something out of our memory, though Poincaré certainly would not have made such a connection. In any event, it is indeed an intriguing question. But so is "Why am I me?"
Morton F. Kaplon
Bethlehem, Pennsylvania
Dear DDJ,
I'm writing in regard to the September 1991 Editorial, "Radio Days, or Making Waves on the Airways."
The Federal Communications Commission will soon decide whether to allocate radio spectrum space for data exchanges between computer users. Thus, as a communications lawyer and software designer, I attended with great interest a recent FCC hearing involving a new technology called "Personal Communications Systems" (PCS). Although the outcome of this proceeding could shape the direction of personal computing for years to come, I was surprised to find that only Apple Computer presented oral testimony on behalf of the computer industry at the hearing.
At stake in the FCC proceeding is whether or not spectrum will be dedicated for the use of wireless computer networks called "Data-PCS." The proposal filed by Apple (which was supported by IBM in papers filed with the FCC) seeks to use this spectrum to build wireless computer networks of a "local area" nature, about 50 meters in scope. If the proposal is granted, Apple will be able to sell wireless computer LANs right out of the box. Costs associated with Data-PCS, such as relocating present spectrum users, would be added onto the price of the computers at a cost of about $10 per unit. The whole scheme would be essentially unregulated since Apple proposes using a model similar to the FCC's Part 15, which governs potential frequency interference by consumer electrical appliances.
Apple should be applauded for its foresight in building a working relationship with the FCC. However, its views are essentially those of a hardware manufacturer. Data-PCS offers the far greater possibility of creating wide-area wireless networks which would free computer users from the present tyranny of telephonic data communications with its arcane interface and inefficient cost structure. Empowering computer users with a wireless network on a city-wide basis in a free and open manner is possible. Instead of an unregulated Part 15 approach, network managers could be licensed and regulated in accordance with the public interest in the same way that the FCC currently regulates radio and television broadcasters. Such a networking system could result in a greater competitiveness in the next century. At present, the major players in PCS are equipment manufacturers and existing cellular telephone companies that are trying to sell the FCC on the idea of personal telephones. Since we already have a wired telephone network along with a cellular network to provide voice communications, it would appear to be far more in the public interest to establish a new service dedicated to providing personal computer communications.
This FCC proceeding is something we, as computer users and software experts, should become involved in. One of the panelists before the FCC suggested the creation of a committee of users to communicate their needs and wants to the FCC. Should such a committee be formed, computer users and software specialists should become involved in this critical process. It is doubtful that the FCC will ever again offer such an opportunity to the computer community.
Henry E. Crawford
Washington, D.C.
Dear DDJ,
I read the October 1991 "Structured Programming" column on vertical markets and was mentally suggesting the ideal tool as I read. Lo and behold, I turned to page 152 and there it was: Clarion.
I have been a consultant in the industry for 17 years and am fluent in all mainstream languages and database design. I have had my fair share of Assembler, Cobol, C, Basic, Pascal, Modula-2, Actor, etc.--being basically a tinkerer at heart. However, I fell over Clarion a couple of years back and have had a successful love affair with it since that day. More especially so in a "vertical market," where I have developed a package for bailiffs using 65 percent Designer, 35 percent sweat. It didn't all quite happen in your "two or three days" but I would still be trying to decide whether to use Object classes, Windows, TV, or develop it with JPI's Btree toolkit if Clarion hadn't taken me roughly by the arm and thrust me into the land of actually earning $$ for your work.
I have had considerable experience with the package and agree with your findings absolutely!! It is a pity that more people have not seen the light.
I have done quite a bit of "LEM" development in C and Assembler and (being a user of JPI's products as well as Borland's) was thrilled to find out that Release 3 of Clarion (due out in early 1992) will be using JPI's Topspeed environment as its engine.
Given your comments, you will probably agree that the folks at Clarion Software have come up with an awesome package. Tight compact code with smart linking; able to access DOS and Windows DLLs; programming in the language of your choice; preemptive multitasking; be Objective or be Obvious; ability to use a whole bunch of off the shelf products. The mind boggles!
Brent Stock
Melbourne, Australia
Editor's note: Speaking of JPI and Clarion... The two companies recently announced an intent to merge. In addition to the standard business coupling, they intend to integrate JPI's optimizing code generator with Clarion's 4GL, claiming database applications will benefit from JPI's fast execution and Clarion's small size and ease of maintenance.
Dear DDJ,
In Steve Teale's article, "Proposing a C++ String Class Standard" (October 1991), he requests feedback on the program interface for a standard String class. OK, here it is.
One of the strong points of C++ is the lessening of global name space pollution. Class member functions may overload a function name already defined elsewhere in the system without fear of conflict. Which function is actually called is based on the context of its usage (i.e., which types are being operated upon). Unfortunately, the class names themselves occupy the global name space. When you use an intuitive name to define a standard class, you run the risk of collision with existing user classes of the same name. I would like to see a standard prefix on all standard classes so that I could avoid those prefixes for any classes which I define. As an example: CppString, or AnsiString.
Another technique that I and many others are using is to suffix type names with _t, for example, the ANSI system type size_t. This helps considerably with automated text searches using your favorite editor or utilities such as grep. This would suggest that the String class be named something like CppString_t. Yes, it's a mouthful to type, but the effort is worth it in the long run. If you are really lazy, you can define a name substitution macro and call it anything you like, such as String. Perhaps the standard should include both a long form name and an include file that can optionally substitute a standard short form name. (This would defeat the original intent.)
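The prefix-plus-suffix scheme, with the optional short-form macro, might look like the sketch below. CppString_t and the macro are hypothetical names, not part of any actual proposal, and the internals are a toy stand-in:

```cpp
#include <cstring>

// Hypothetical standard class name: "Cpp" prefix plus "_t" suffix.
class CppString_t {
public:
    explicit CppString_t(const char *s) {
        std::strncpy(buf, s, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
    }
    const char *c_str() const { return buf; }
private:
    char buf[64];   // toy fixed-size storage, for illustration only
};

// Optional short-form name -- convenient, but it reintroduces exactly
// the collision risk the long name was meant to avoid.
#define String CppString_t
```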
I have found that you need to be very careful when overloading operators. If the use of an operator is not intuitive, it makes the code very difficult to read. A typical bad example is overloading the * and % operators to mean "dot product" and "cross product" for a vector class. In formulas that involve mixed types--vectors and scalars, for instance--the reader must spend more time tracking the types of each variable and mentally translating the action of the operators than following the flow of the algorithm. A better approach is to implement dot() and cross() member functions.
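The named-function alternative for a vector class might be sketched as follows (a minimal illustration, not taken from any particular library):

```cpp
#include <cassert>

// Named dot() and cross() members keep formulas readable, where
// overloaded * and % would force the reader to track operand types.
struct Vec3 {
    double x, y, z;

    double dot(const Vec3 &o) const {      // scalar (dot) product
        return x * o.x + y * o.y + z * o.z;
    }
    Vec3 cross(const Vec3 &o) const {      // vector (cross) product
        return Vec3{y * o.z - z * o.y,
                    z * o.x - x * o.z,
                    x * o.y - y * o.x};
    }
};
```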
In Steve's example
String v = "abcd"; v += 1;  // result "bcde"
the action of the += operator is not intuitive. In the example
String v = "1234567890"; v <<= 1;  // result "2345678901"
the <<= operator is being used to rotate the string left, when the intuitive use would be to shift the string left.
I have also found the assignment operators to be very powerful for increasing efficiency because you don't need to create a temporary variable to hold the results. In the String class the use of += to concatenate to a string is a good example.
Where this breaks down is with operations that cannot be intuitively expressed using symbolic operators. For vector classes, if you used the ~ operator to express the computation of the unit vector, it is then easy and consistent to define ~= as the computation of the unit vector in place, which is more efficient. This does not work for operations defined using member functions. If the unit vector were implemented as the uvec() member function, there is no direct translation to an assignment member function. A possible implementation would be to create a uvecAssign() member function that returns a reference to the object.
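The uvec()/uvecAssign() pairing might look like this sketch (the class and names are illustrative):

```cpp
#include <cmath>

struct Vec2 {
    double x, y;

    Vec2 uvec() const {          // unit vector, returned as a copy
        double len = std::sqrt(x * x + y * y);
        return Vec2{x / len, y / len};
    }
    Vec2 &uvecAssign() {         // unit vector computed in place
        double len = std::sqrt(x * x + y * y);
        x /= len;
        y /= len;
        return *this;            // reference return allows chaining
    }
};
```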
In Steve's proposed standard he suggests the implementation of member functions upper() and lower(). These copy the original string, convert the copy to uppercase or lowercase characters, and return the copy. This is good and necessary, but not always efficient. I would propose two additional member functions, upperAssign() and lowerAssign(), to convert the case of the string in place.
My use of the Assign suffix is just an example, and I feel it is a little too verbose, but I would like to see a proposal for a standard naming convention that takes additional assignment functions into account.
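The copying/in-place pair could be sketched as below. The class is a toy, not Steve's proposed String; only upper() and upperAssign() are shown, with lower()/lowerAssign() analogous:

```cpp
#include <cctype>
#include <cstring>

class Str {
public:
    explicit Str(const char *s) {
        std::strncpy(buf, s, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
    }
    Str upper() const {          // copy, then convert the copy
        Str copy(*this);
        copy.upperAssign();
        return copy;
    }
    Str &upperAssign() {         // convert in place, no temporary
        for (char *p = buf; *p; ++p)
            *p = static_cast<char>(std::toupper(static_cast<unsigned char>(*p)));
        return *this;
    }
    const char *c_str() const { return buf; }
private:
    char buf[64];                // toy fixed-size storage
};
```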
Many of the operators were declared as being friends of the String class, as shown in Example 1. Example 2 shows how the first two operators could have been implemented as member operators of class String. I was wondering at the reasoning behind Steve's choice.
Example 1:

    friend int operator+(const String&, const String&);
    friend int operator+(const String&, const char*);
    friend int operator+(const char*, const String&);

Example 2:

    int operator+(const String&);
    int operator+(const char*);
The National Institutes of Health class library (see Data Abstraction and Object-Oriented Programming in C++, by Gorlen, Orlow, and Plexico) includes the definition of a SubString class. I have not looked closely at their implementation and possible uses, but if we are going to define a standard string class, and a substring class is useful, then they should be defined at the same time.
I would like to thank Steve for an excellent article and for bringing proposed standards into a public forum.
Carey Brown
Denver, Colorado
Dear DDJ,
I was happy to see the article "C++ for Embedded Systems," by Stuart Phillips and Kevin Rowett in the October 1991 issue. I have been working on a similar embedded project using C for some time now and was just starting to think about how it might be implemented in C++. Since our system is not currently running under DOS, we share many of the problems mentioned in the article, such as having to modify the startup module, not being able to use any library routines which call DOS, converting EXE files, and so on. The article and the source listings were informative and helpful, with one glaring exception. All of the software relating to the article which I downloaded was in C, not C++.
There are still some major unanswered questions which I have with regards to moving to C++ for embedded software. One thing I would really like to know is how to deal with dynamic memory because any calls to standard library functions like malloc() and free() require DOS. In C we can get around this by using only static memory. In C++, dynamic memory seems to be a necessity since objects are created and destroyed at runtime. I eagerly anticipate any future articles on C++ for embedded systems which might address this question.
Will Knight
Los Gatos, California
Stuart and Kevin respond: Thank you for the idea for a future article! You raise an excellent point regarding C++ and its dependence on dynamic memory allocation. Allocation requests are made whenever objects are created, with the corresponding release of memory being made when objects are deleted or fall out of scope. In our article we recommended careful review before using any of the standard library routines, by inspecting their object code, using the debugger, or purchasing the library source code from Borland.
Borland C++, Version 2.0 allocates dynamic memory from the heap. Heap initialization is performed by the startup code contained in C.ASM and does not require DOS support; we modified the startup code to set up the heap for our communications processor. You will need to adapt this for your environment.
Both malloc() and free() use memory from the heap in processing dynamic allocation requests. The C++ operators 'new' and 'delete' call malloc() and free(), respectively. The library versions of malloc() and free() may need to be replaced for embedded system work if there is any possibility of interrupt service routines creating objects or needing to allocate memory. Memory allocation requests generally cause manipulation of linked lists or chained memory blocks. Interrupts that require memory allocation requests should be disabled while the linked lists are searched or altered. Failure to disable interrupts in this scenario will result in memory leaks or worse!
Replacing malloc() and free() is not an overwhelming task; there are many examples of memory management routines in books on C and C++. Many of these routines can be adapted for embedded system use by examining their source code for critical sections which must be protected from interruption. You must exercise caution when deciding in which areas to disable interrupts since the performance of embedded systems is generally dependent on interrupt latency. We'll certainly address this issue in any future articles we write.
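The replacement described above can be sketched as follows. Here disable_interrupts()/restore_interrupts() are hypothetical stand-ins for whatever the target hardware provides (CLI/STI on an x86, for instance), and the heap walk itself is simulated with the library allocator:

```cpp
#include <cstddef>
#include <cstdlib>

// Simulated interrupt flag; a real embedded port would touch hardware.
static bool interrupts_enabled = true;

static bool disable_interrupts() {
    bool was = interrupts_enabled;
    interrupts_enabled = false;   // e.g. CLI on an x86
    return was;
}

static void restore_interrupts(bool was) {
    interrupts_enabled = was;     // e.g. STI, only if previously enabled
}

// Interrupt-safe wrappers: the heap's linked lists must not be walked
// or altered while an interrupt handler could also request memory.
void *embedded_malloc(std::size_t n) {
    bool was = disable_interrupts();
    void *p = std::malloc(n);     // stand-in for the real heap walker
    restore_interrupts(was);
    return p;
}

void embedded_free(void *p) {
    bool was = disable_interrupts();
    std::free(p);
    restore_interrupts(was);
}
```

Global operator new and operator delete could then be overloaded to call these wrappers, so that object creation in C++ routes through the protected allocator.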
Copyright © 1992, Dr. Dobb's Journal