Departments


We Have Mail


Dear Dr. Plauger:

I am writing in response to P.J. LaBrocca's recent article "Dynamic Two-Dimensional Arrays" (November 1993 issue of The C Users Journal). On page 77 of that article, LaBrocca mentions that his dyn2darray routine suffers from limited portability because of memory-alignment problems. I have found a method of implementing LaBrocca's two-dimensional array allocation without this memory-alignment problem. The method I describe below maintains a crucial advantage of dyn2darray: it still requires only one call to calloc. The key idea is to determine which memory addresses are suitably aligned for storing the objects. The allocated memory is used to store three different data types: pointers to void, unsigned integers, and objects of unknown type. This may necessitate leaving gaps between objects of different types to achieve proper memory alignment.

I start with the following question: How do we know at which addresses we can safely store the objects of the 2-D array?

Let obj_size = sizeof(object type).

Let p be a character pointer to dynamically allocated storage.

C allows us to store a one-dimensional "array" of these objects starting at address p. From this it follows that the addresses

p + k * obj_size (k = 0,1,2,...)
are properly aligned for storing these objects (as long as we don't go beyond the allocated region of memory). For example, if the object has type double, sizeof (double) = 8. (Suppose this to be the case on some machine.) Then

p, p+8, p+16, p+24, ...
are valid addresses at which to store a double.

This example shows how to allocate a 2-D array of doubles so that the pointers and the objects are stored in the same allocated memory region while preserving proper memory-alignment. For the moment I will not worry about the added complication of storing the number of rows and columns. Suppose that we want to allocate a 2-D array of doubles given the following conditions:

sizeof (double *) = 2, rows = 7,
columns = 4, sizeof (double) = 8
Storage for the pointers uses up 14 characters of memory. The first available spot after the pointers at which we can store the doubles is at p+16. This means that a gap of two characters must be left between the pointers and the doubles to ensure proper alignment of the doubles. In general, we would store the objects at the address

p + Dyn2dRndUp(SpaceForPtrs, obj_size)
where

SpaceForPtrs = rows * sizeof (void *)
and

Dyn2dRndUp(i,j)
is a macro that rounds i up to the nearest multiple of j (for example, Dyn2dRndUp(14, 8) = 16).

Of course, we still have to consider how to store the number of rows and columns between the pointers and objects. The first address at which we could store these values is given by:

p + SpaceBeforeRowsAndCols
where

SpaceBeforeRowsAndCols = Dyn2dRndUp(SpaceForPtrs, sizeof (unsigned))
The amount of space used by the pointers and the two unsigned values (number of rows and number of columns) is:

SpaceForPtrsRowsAndCols = SpaceBeforeRowsAndCols + 2 * sizeof(unsigned)
Finally we can begin storing the objects at the address

p + SpaceBeforeObjects
where

SpaceBeforeObjects = Dyn2dRndUp(SpaceForPtrsRowsAndCols, obj_size)
If there is a substantial gap between the end of the pointers and the beginning of the objects there may be several locations at which we could store these two unsigned values. To make sure that we can recover these two values (given only the 2-D array) we must store them as close to the objects as possible. An example will clarify the matter. Suppose:

rows = 3, cols = 2,
sizeof (void *) = 2,
sizeof (unsigned) = 4,
obj_size = 22
These sizes were chosen to illustrate the most general case. The pointers use up the first six characters of memory. The first address available to store the number of rows would be p+8. The objects can be stored starting at p+22. The first two columns in Figure 1 show that we have two choices as to where to store the number of rows and columns.

In this example, SpaceBeforeObjects = ((char **) p)[0] - p is 22. If you redo this example setting rows = 5 you will also get 22 (see the third column in Figure 1). Therefore, storing the number of rows and columns as close to the pointers as possible makes it impossible to recover them later. The first and third columns in Figure 1 make it clear that the location of these two values cannot be determined by the amount of space before the objects.

Storing these two values as close to the objects as possible solves this problem. We simply round down the space before the objects to the nearest multiple of sizeof (unsigned). In the example above (columns 2 and 3) the space before the objects (22) is rounded down to the nearest multiple of sizeof (unsigned) (4) to get 20. The values for the number of rows and columns are to be stored immediately before this offset (p+20).

In general, the expression

p + Dyn2dRndDown(SpaceBeforeObjects, sizeof (unsigned)) - 2 * sizeof (unsigned)
where Dyn2dRndDown(i,j) is a macro which rounds i down to the nearest multiple of j, gives us a pointer to the beginning of the row and column data. In the above example we would get

p + Dyn2dRndDown(22,4) - 2 * 4 = p + 12
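Putting all of this together, the allocation can be sketched as follows. This is my own reconstruction for illustration, not Mr. Coffman's actual dyn2darr.c (the function name Dyn2dAlloc is invented); the intermediate names mirror those used above.

```c
#include <stdlib.h>

#define Dyn2dRndUp(i, j)   ((((i) + (j) - 1) / (j)) * (j))
#define Dyn2dRndDown(i, j) (((i) / (j)) * (j))

/* Allocate a rows-by-cols 2-D array with a single call to calloc.
   The block holds the row pointers, then (as close to the objects
   as possible) the two unsigned counts, then the objects, with
   gaps as needed for alignment. */
void *Dyn2dAlloc(unsigned rows, unsigned cols, size_t obj_size)
{
    size_t SpaceForPtrs = rows * sizeof(void *);
    size_t SpaceBeforeRowsAndCols =
        Dyn2dRndUp(SpaceForPtrs, sizeof(unsigned));
    size_t SpaceForPtrsRowsAndCols =
        SpaceBeforeRowsAndCols + 2 * sizeof(unsigned);
    size_t SpaceBeforeObjects =
        Dyn2dRndUp(SpaceForPtrsRowsAndCols, obj_size);
    char *p = calloc(SpaceBeforeObjects + rows * cols * obj_size, 1);
    unsigned *counts;
    unsigned r;

    if (p == NULL)
        return NULL;

    /* Store the counts immediately before the rounded-down offset,
       so they can be recovered later from the array alone. */
    counts = (unsigned *)(p
        + Dyn2dRndDown(SpaceBeforeObjects, sizeof(unsigned))
        - 2 * sizeof(unsigned));
    counts[0] = rows;
    counts[1] = cols;

    /* Aim each row pointer at its row of objects. */
    for (r = 0; r < rows; r++)
        ((void **)p)[r] = p + SpaceBeforeObjects
                            + (size_t)r * cols * obj_size;
    return p;
}
```

A caller casts the result to the appropriate pointer-to-pointer type and indexes it as an ordinary 2-D array; a single free releases pointers, counts, and objects together.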
I provide new versions of dyn2darr.c and dyn2darr.h [available on the monthly code disk — pjp].

Finally, I should mention that one small portability problem still remains. Both my version of the code and LaBrocca's assume that all pointers have the same size and representation. My understanding is that this is much less of a portability issue than the memory-alignment problem. (I have seen several C books present "portable" code that makes this assumption.)

P.S. I hope this proves useful to your readers. I do not have my own email address but I can be reached at the address shown below, or at jessica @ engin.umich.edu.

Steve Coffman
C-TAD Systems, Inc.
Boardwalk Office Center
3025 Boardwalk Drive
Ann Arbor, Michigan 48108
(313)-665-3287

Whew! I think you illustrate neatly why LaBrocca saw fit to sidestep the storage alignment issue. You can also sidestep the problem of different sizes of data pointers by storing only pointers to void. Still, beyond a certain point, the investment in potential portability starts getting hard to justify. — pjp

Hi Bill,

I've just finished reading the November CUJ — very entertaining as always — and I couldn't help notice the inside back page ad:

Sequiter Software Inc. says, "As with C, ANSI C++ is an international standard across all hardware platforms. This means you can port CodeBase++ applications between DOS, Windows, NT, OS/2, Unix, and Macintosh — today."

Sigh! The BSI jumped up and down on a few advertisers over stuff like this in the days before validated C compilers were available. Perhaps someone should have a word with the folks at Sequiter?

See you in San Jose?

Regards,

Sean Corfield
Development Group Manager
Programming Research, England
Sean.Corfield@prl0.co.uk (44) 372-462130

Yeah. People aren't supposed to claim conformance to a standard until it's approved. In the case of C++, it's particularly daring to refer to a putative "international standard." pjp

Greetings,

Enough of the language standards and extensions stuff already. Compare and contrast:

Applications: Sequiter Software's CodeBase 5.0 vs Kedwell's DataBoss

Libraries: Greenleaf's SoftC Database Lib vs Software Science's Topaz

GO BROWNS!

Sincerely,

Noah Hester
nbh@cis.csuohio.edu

Your wishes are noted, except for the part about the Browns. pjp

Dear PJP:

In a letter published in the November 93 CUJ, you mentioned the problem of using sizeof in preprocessor statements, something most ANSI compilers don't allow, and offered a solution:

static char junk[sizeof(structname) != 132 ? 0 : 1];
but also offered the caveat that it wastes a byte of storage. I've been using a similar solution for several years that doesn't waste any storage:

typedef struct {
    /* ensure sizeof(structname) is exactly 132 bytes */
    char x[sizeof(structname) == 132];
} _size_check_structname_;
Because the statement is a typedef, no storage is allocated. The == operator is guaranteed to generate a 0 or 1 result (on an ANSI compiler). Even on a few compilers I've encountered which have an extension allowing a zero-sized array as the last element in a structure, an error message is still generated, because the size of the structure overall cannot be zero. The error message you get from this construct varies between compilers, but it rarely indicates what the real problem is, so comments in the code are essential. (The fact that the typedef name is _size_check_something_ helps. Using a similar standard naming convention throughout a project is probably a very good idea.)

Other checks are possible using this method. For example, in a project once I had special 16-byte-at-a-time block zero and block move routines for performance. To safely use them on structures, I included the check:

typedef struct {
    /* ensure sizeof(memnode) is a multiple of 16 bytes */
    char x[(sizeof(memnode) & 0x0F) == 0];
} _size_check_memnode_;
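The idiom generalizes into a reusable macro. This packaging is my own, not Mr. Lepore's; the macro name and the example conditions are arbitrary:

```c
/* Compile-time assertion: compiles cleanly when cond is true,
   and forces a zero-sized-array diagnostic when it is false.
   The name argument keeps each typedef tag unique and makes the
   intent visible in the compiler's error message. */
#define SIZE_CHECK(name, cond) \
    typedef struct { char x[(cond) ? 1 : 0]; } _size_check_##name##_

/* Example uses (both conditions hold on any ANSI compiler): */
SIZE_CHECK(int_at_least_short, sizeof(int) >= sizeof(short));
SIZE_CHECK(char_is_one_byte, sizeof(char) == 1);
```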
I've encountered several C programmers who hated such compile-time assertions in source (or header) files; perhaps they never make mistakes, or they enjoy long debugging sessions. While it is rare to make a mistake which causes one of these assertions to fail, the hours saved when it happens are worth the minutes it takes to code them.

Ian Lepore
Moderator, BIX c.language conference
ianl@bix.com

I like your solution better than mine. Thanks for telling us about it. pjp

Dear Sir

I have long been an avid reader of the articles you write in Dr. Dobb's, The C Users Journal, etc. I enjoy every piece, especially your style of writing.

I just want to say thanks for your great writing, and for being such a great researcher in the computer area (and the like).

Your Friend,

Leo Medellin

                          0
                        _.>/)
leo.medellin@asb.com * (_) \(_)....
leo.medellin%bbs@quake.sylmar.ca.us
ak467@FreeNet.HSC.Colorado.EDU
Thanks. And I like your bicyclist. pjp

Mr. Plauger

Thanks for continuing to produce an interesting magazine. I have been a subscriber for about 5 years. Some points/comments:

1. Articles such as "Code Capsules" by Chuck Allison are useful — we must all remember that new, young C programmers join the ranks and have missed all the useful information contained in the early issues of the Journal.

2. Linux — a Unix System V clone which runs on a PC, costs virtually nothing and includes Emacs, Latex, and X together with the GNU C and C++ compilers — is becoming very popular. Is there any possibility of some coverage for Linux in your magazine?

3. Dr. Dobb's Journal has produced a CD-ROM containing all articles from January 1988 to June 1993, together with text-search facilities. Is there any chance that such a product will be produced by R&D? I feel that many subscribers would find this of interest.

Yours sincerely,

David Richards
184 Turf Pit Lane
Moorside
Oldham
OL4 2ND
ENGLAND

(1) I like Chuck's writing too. Glad you appreciate the function it serves in this magazine. (2) I'll happily entertain proposals for articles on Linux. (3) We've been exploring numerous ways to make the material from CUJ more available to our readers, but I don't have an answer for you yet on this topic. pjp

Dear Mr. Plauger,

I really enjoy reading The C Users Journal. There is one glaring omission in the C++ I/O streams library: a reset manipulator to set the stream back to default mode. Any function (except for main) has no knowledge of what flags, fill character, and precision are set for any streams it receives from the caller. To make sure there are no surprises, it has to explicitly set all the flags, the fill character, and the precision to the values it needs. Having a reinit manipulator would make this much easier.

Now let's take it one step further: what happens when the function returns control back to the caller? The mode of the stream may have changed, and the caller has to re-set everything. What a mess! Of course the called procedure should undo any changes it has made, so it has to save the mode on entry and restore it on exit, adding a couple of lines to every function. It seems obvious that we need save and restore manipulators to do the job.

Of course you can implement the reinit, save, and restore manipulators yourself, but this is such a universal need that I don't understand why they're not part of the standard library. Right now everyone who uses I/O manipulators has to reinvent the wheel on their own. Incidentally, C Standard I/O has a clear advantage here because it's modeless.

Sincerely,

Hans Salvisberg
Salvisberg Software & Consulting
Bellevuestr. 18
CH-3095 Berne
SWITZERLAND

The nearest thing to what you want in the current C++ library draft is ios::copyfmt, which lets you copy just the formatting information from one ios object to another. pjp

Dear PJP:

I wish to comment on the article "A Revision Control System for MS-DOS", published in the July 1993 issue of The C Users Journal. There are two errors that will cause people a lot of grief. The function print_warning listed on page 48 declares the variable string as a character pointer, but doesn't assign it a value. It is then used in a call to fgets as the buffer location. This will lead to the data fgets reads being written to who knows where, and may cause serious problems. It caused my system to re-boot. The same type of error exists in the function rev_number, listed on page 50.

Another concern I have about the code presented in the article is the lack of checks for unexpected end-of-file conditions. The first thing I put under RCS control, after fixing the above mentioned fault, was the RCS source code. I believe I then used checkout to get a copy of a file, and my system hung. The reason the system hung was that the editor I used to create the source files did not require that the file end with a newline character, so the RCS file did not end with a line containing the delimiter, but with a line containing the } character followed by the delimiter. Since there were no checks for EOF on the input file, the system kept calling fgets to get the next line, and the check for the delimiter always failed.

I also worry about the lack of checks for write failures. It appears that there could be serious problems if writes are attempted when the disk is already full, though I must admit I have not seen this problem.

J.P. Schoonover
(708) 979-7907

It is always interesting to see what you have to do to code prepared for presentation when you start using it seriously. Or code tested on one system when you move it to another. pjp

Dear PJP:

Over the years I've gotten much useful code and advice from CUJ. However, lately the quality of the published code has decreased significantly.

As an example, consider the last two articles on exceptions CUJ has published. While I do not wish to single out these authors, neither of these packages compiled without significant modification on any popular workstation or PC operating system, nor worked as advertised once compiled. In addition, neither package (on the code disk or as available from the Internet) included any installation instructions. It seems obvious the articles were accepted on the basis of perceived interest and not the portability or functionality of either package.

I think all but the most basic of packages offered should include installation instructions and dependencies. Both exception packages have substantial requirements for non-standard development packages (such as a specific version of gmake). Perhaps a "tools and rules" sidebar containing instructions for building and using the code would solve this problem.

A more significant problem is the poor performance and portability of the code. While I would not expect production or GNU quality code from CUJ, the functionality advertised should be present and hopefully relatively bug free. I was especially disappointed with the most recent package because of the attractiveness of an exception mechanism portable between C and C++. After much work by myself and the author I was able to compile this package but then discovered substantial run-time problems. Test cases were not present, and the samples provided with the package would not even compile, owing to undefined symbols rather than any obscure portability issue. I commend the author for all the help he gave me, but why did CUJ publish a package seemingly without looking at the source or attempting to build it?

Good examples of previous high-quality CUJ articles include the socket library and generic object packages. Both were simple enough to compile on any operating system offering TCP/IP services or a C++ compiler, and both are robust enough to have become a standard part of my programming toolbox. The two articles I've singled out offered functionality of great usefulness, but neither delivered on its promises, nor did the articles contribute substantially to the understanding you could gain from a quick reading of any number of C++ or Eiffel books.

If CUJ is to be a pragmatic magazine for professional programmers and not a fluff publication or academic journal a la Communications, its offerings should set the standard for well-executed, portable code. Professionals as well as beginners could benefit from the example such a publication would set.

C. Justin Seiferth
Phillips Laboratory
(505) 846-0561 (V)
(505) 846-0473 (F)
seiferth@lyra.plk.af.mil

We do indeed make some effort to pick articles that have code which is both useful and reasonably correct. Sadly, we (I) don't always guess right. And we lack the resources to compile and test all the submitted code, or even verify that they are easy to install and run cursory examples. I wish it were not so.

On the other hand, your experience with one particular author is not unique. Often, our readers tell us that authors make extraordinary efforts to assist potential users of their code. I am pleased that our contributors are so willing to follow through on their submissions.

Both this and the preceding letter underscore the essential problem of using other people's code. There is a tremendous variation in robustness, portability, and ease of use. I'm not casting aspersions on the talents of our contributors when I say this: what is a good design decision for one person may be an incredibly poor decision for someone else. We can only hope that most of the articles we run are useful to many of our readers much of the time. We'll keep trying. pjp

Dear Mr. Plauger,

In the August 1993 C Users Journal article "Automated Unit Testing," Mr. Meadows lists several guidelines he recommends. The first is "Include all test code inside a main program, that is, inside a #ifdef TESTMAIN block."

Well, that approach just does not work that well. Having developed and maintained several large products over a number of years, I have found it better to have truly independent test stubs. Aside from not having lint complain about multiple mains, having one test program that exercises all the functions in a library is much more useful and compact. In addition, golden output of each test function is easier to manage if it is maintained in files along with the library.
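For readers without the August issue at hand, the guideline in question amounts to this (a minimal sketch; the function is invented):

```c
/* mylib.c -- library code with an embedded test driver.
   Compiled normally, the file yields just the library function;
   compiled with -DTESTMAIN, it yields a standalone test program. */
#include <stdio.h>

int double_it(int n)
{
    return 2 * n;
}

#ifdef TESTMAIN
int main(void)
{
    printf("%d\n", double_it(21)); /* expect 42 */
    return 0;
}
#endif
```

The alternative advocated here moves the driver into a separate source file, so a single program can exercise every function in the library.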

A library (or application) code area is then composed of Source Code, Make files, a regression test script, and golden unit test case input and output files. This is all maintained in the RCS pool along with the source. Although we find it easier to keep the unit tests with the source code, if disk space is a problem they can be maintained separately.

This does violate another guideline of Mr. Meadows: "Do not make the test program dependent on external files." Well, sorry. But any sufficiently large system will have some external files. Libraries will rely on other libraries. And applications will have large external data files. Having some simpler versions of the data files for unit tests is not a bad tradeoff.

We find, as a general rule of thumb, that our library unit test cases are at least as large as the code itself. When one adds in the test stubs, and golden output files, it does build up quickly. Some Application Unit Tests can grow even larger, say 2-5x.

However, the payback is when any developer can go into a library or application, run make test, and in a few minutes see if their changes have affected any previous results. Given that 20% of a 150,000 line software product may be changed during a given release, this pays off very quickly in not introducing unwanted bugs. The costs of this approach are disk space, and the discipline to maintain the tests as part of the source and development process.

On a final note, one of our newer tools has been the Purify software from Pure Research. Even in evaluation, the product was able to find several memory leaks and other problems which had gone undiscovered for years. I personally recommend this group of products for any serious software team.

Sincerely,

Richard Vireday
Sr. Software Engineer, Intel
rvireday@pldote.intel.com
(916)351-6105

I've found the approach described by Meadows very useful for smaller projects, and the approach you describe better for larger ones, for the reasons you describe. pjp

Dear Mr. Plauger,

While I can't claim your longevity in data processing, I have been in the industry since 1976. As you have pointed out, there's little in the world of data processing that hasn't been seen before. In particular, there have always been people who believe it is possible to construct a perpetual motion machine for software support. Once you prime the pump with an initial license fee, the machine keeps producing answers, bug fixes, and enhancements with no further input. This belief is reinforced by the examples of Word Perfect Corporation and Microsoft, who seem to keep providing support just because they think it's the right thing to do.

Free support is really a variation on the infamous Ponzi scheme; you give me $1,000 and I will pay you $250 interest every month forever. Or, to rephrase, you buy my $125 competitive software upgrade, and I will pay the distributor his cut and provide you with $45 per hour support forever. It's become the case for PC software, including the development tools advertised throughout your magazine, that selling computer software in an extended market requires the vendor to either lie or become a software missionary.

This leaves potential customers only two ethical and legal purchase alternatives: only buy products from vendors who charge enough to cover support, or accept spotty support provided for free. Anyone who steals software should never become a parent, or should have a high tolerance for hypocrisy when their child is found cheating or shoplifting.

Sincerely,

James P. Hoffman
416 West Kerr St.
Salisbury, NC 28144

While I wouldn't use your emotion-charged phraseology, I agree with much of what you say. I ran a software company for a decade and found myself entertaining a different scheme for pricing code and maintenance almost every year. Charge too much and your competitors steal the market. Charge too little and you go broke getting rich. I'm glad I don't run a software company today. pjp

Dear Mr. Plauger:

I want to express my thanks for the three-part series CUJ ran on pointers by Chuck Allison. These are the kind of articles that are so helpful to me. Incidentally, they exemplify what is missing from most books on C, as Mr. Musielski complained in his letter in the October issue. But to Chuck Allison's articles: I was raised on assembly language programming and did nothing else for the first ten years of my programming experience. Consequently, I was well aware of the advantages of indirect addressing, but it has been amazing to me how little this benefited me in understanding C's pointer syntax.

I have a library of 38 books on C. Yet so often when I run into a problem I must wade through more than half of them before I discover the key. It is a constant annoyance that most books on C never proceed beyond the simplest example. I will give you a trivial example: look at most books aimed at beginners in C. How many show that curly braces are necessary with if, for, do, and while when more than one statement follows? Trivial, maybe, but not to a beginner. How many warn that scanf is worthless for dealing with user responses that don't meet the programmed format requirements? How many show useful alternatives for interactive user responses?
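One such alternative, for the record (my own sketch, not taken from any of the books mentioned): read a whole line with fgets, then parse it with sscanf, so malformed input cannot leave junk stuck in the stream.

```c
#include <stdio.h>

/* Read one line from fp and try to parse an int from it.
   Returns 1 on success, 0 on parse failure or end of input.
   Unlike bare scanf, a bad line is consumed and discarded. */
int read_int(FILE *fp, int *out)
{
    char line[128];

    if (fgets(line, sizeof line, fp) == NULL)
        return 0;                      /* EOF or read error */
    return sscanf(line, "%d", out) == 1;
}
```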

My first exposure to higher-level language was BASIC. I have exactly three books explaining the language and never needed more. Mike Musielski has a valid grievance and it is just a little unsettling to me that I have had to collect so many books on C despite my admiration for C and appreciation of its features and power.

On the subject of Numerical Extensions to C, about which you wrote in the September issue: my interest in C mainly centers on electronic engineering programming, and I watch with great interest the deliberations of the NCEG group. One feature about which I have seen no information is whether they are looking at non-integer exponentiation. BASIC allows statements such as 2^1.6, which save a lot of bother working with logarithms. The only text I have found dealing with engineering programming is Numerical Recipes in C. Unfortunately, the authors worked assiduously at a FORTRAN translation and mostly ignored the powerful features of C because no counterpart existed in FORTRAN. The result is that oft-times the sources are not easy to read. I often fall victim to their "unit" approach to arrays, where they simulate FORTRAN's elimination of the zero element of an array.

Sincerely,

Forrest Gehrke
75 Crestview Rd.
Mountain Lakes, NJ 07046

There are so many books on C simply because there is a huge market. Everybody wants to write the next Kernighan & Ritchie (still the best selling technical book ever), and nobody wants to leave an entire market to some other potential K&R.

As for your question about exponentiation, it seems to me that the current pow function does what you want. pjp

Mr. Plauger,

With much amusement, I read your article "An Embedded C++ Library" in the October 1993 issue of Embedded Systems Programming. In the past, standards and embedded systems were always separate subjects. They did not benefit from each other. Now that you are an official EPSILON [Embedded Programming Society, International, and the Loyal Order of Nonentities — pjp] author I am glad to see that you want to become part of the solution rather than part of the problem. By way of your first paragraph in the Embedded Wish List section of your article, I see that you understand that the major problems are non-technical. Welcome and glad to have you on board.

I have some experience trying to get the ANSI C standards committee to provide language support for embedded systems. For the most part, the ANSI C committee was a very homogeneous group of compiler writers whose expertise in embedded systems was unhindered by their ignorance of the subject. I tried to make some of your points at one of their meetings. They literally laughed at me. The chair of the subcommittee addressing these issues openly equated embedded systems with toasters. As a group, they were arrogant, rude, and lacked the experience to understand the technical issues except from the standpoint of compiler design. For the most part, they seemed to prefer it that way.

It reminds me of the story of the salesman and the engineer on safari in Africa. The first morning, the salesman took the engineer lion hunting. Shortly after they left the hut, the salesman and the engineer came running back with a lion snapping at their heels. As they reached the hut the salesman opened the door and stepped aside as the engineer and then the lion ran into the hut. Slamming the door shut the salesman bragged, "I caught him, now you skin him."

Instead of dealing with embedded systems issues, the ANSI C committee slammed the door and was done with the problem. It is the embedded systems programmers who are constantly reminded that they are still living with a lion. I hope the C++ salesman knows more about catching lions.

Years ago, at your request, I sent you a copy of my public comments on ANSI C by fax. Let me paraphrase a few key recommendations.

Now we get to deal with the C++ standard. I hope history will not repeat itself. I wish you lots of luck, fortitude and the tenacity to get the job done.

Yes, I know your article appeared in another magazine. The other magazine is deficient. It lacks a Mail column. I suppose it has something to do with committing space to advertisers or readers depending upon where your priorities lie. I like The C Users Journal. It has lots of mail with many differing and even critical viewpoints. Since you write for both magazines and because this is a topic with very significant impact on readers of both magazines, please forgive me the sin of mentioning the name of another publication. This is just my way of trying to get broader support for making a better language.

Sincerely,

Russell Hansberry
171 Whitney Road
Quilcene, WA 98376-9629
Telephone: 206 765-4465
Fax: 206 765-4430
Compuserve ID: 70314,1506

I'm really sorry you left that C Standards meeting feeling the way you did. I can assure you that Jim Brodie as Chair would not tolerate such open expressions of disdain as you perceived. (He is personally incapable of being rude to another person, in my experience.) There was expertise on embedded systems within X3J11 and we had discussed any number of such issues in earlier meetings. My article in DDJ expressed my dismay that we chose to provide so little explicit support for embedded programming in Standard C, but it was not intended as a criticism of the committee's decisions.

I agree that the level of standardization you describe would be helpful in many circles. From experience, however, I know how much work it takes to flesh out what you propose. And I know the burden it would impose on most implementors to conform. So I expect that, while some things will be better attended to in the C++ standard for embedded programming, the final result will still fall short of your guidelines. pjp

Dear PJP,

Thanks for the informative article [on what? can someone guess what this refers to and insert its name and publication date? - pjp]. Incidentally, the October issue of Byte carries two excellent articles on similar products, which are more expensive, yet seem to do the same thing: CodeCenter, ObjectCenter, and others (page 28) and the heavily-advertised and consequently high-priced BoundsChecker (page 159). (I suggest CUJ acquire the rights and carry those articles in December, for the benefit of all readers.)

Isn't it a shame (for the C language compiler manufacturers) that such products are necessary? I understand that in the early days of C, products like the manifold LINTs were necessary: there was memory enough either for a compiler or for a LINT-type consistency check. But nowadays, our compiler manufacturers go after a fashionable C++ compiler, allowing their C compiler to gather dust. Can't someone produce a decent C compiler which catches those memory overwrites etc. in a "debug" or "verbose" mode? And then give us decent longs which can be used for business applications also (the "pennies in long doubles" problem)? You mentioned in your October editorial that people are demanding a revision of the C standard. You seemed surprised, after having jumped onto the C++ wagon in a hurry. You made me curious. Can you tell us more?

Ref: C++ vs.C

I have learnt that mediocre standards are better than brilliant ideas which turn the users into guinea-pigs for testing one revision after another. See the PC, which in its day was not the most brilliant, yet... See C nowadays. It is all right for an academician to buy a C++ compiler. But what if you produce a "ton" of software in C++, knowing that you will have to adapt your code by July 1994, when the new standard hopefully arrives? The guinea-pigs are necessary to fund the development effort, but not everyone can afford... I think the field is maturing. We don't believe anymore that the latest is the best. In a different field, take the software developers (WordPerfect, for example) who deliberately shun Windows, as their own graphics routines under DOS are superior! (Going by BYTE, October 1993.)

Suggestion: Devote a topical issue of CUJ to the C++ vs. C debate, and discuss — for the benefit of your readers — the merits of the one over the other. The basics. Not the style you adopt when you talk to colleagues on the C++ committee. You get my idea. I am particularly interested in the performance payoff for the interpretative elements in C++ ("housekeeping" known from good olde BASIC back in ..., with the deletion of runtime objects). Everyone knows by now that programs consist of data and algorithms. C hinges everything on algorithms, so that data are ubiquitous unless you deliberately implement some "information hiding." C++ sees data only (or mainly) and subordinates algorithms to data. It is the other extreme: certainly more appropriate for big projects, but hardly suitable for the small application, and hardly suitable for top-down design. Once you know what you want, it is all right to pull together the objects you have collected in a bottom-up approach. The shift in paradigm is total...

Organizing code is very much like organizing companies. (I have worked in that field called management consultancy for some 10 years!) In the small company the boss looks after everything. No need for information hiding. Then come the specialized departments. In C++ these are the objects. C++ uses structures to bundle data and algorithms. What if C were (made) a little more aware of subdirectories? (Yes, the subdirectories of the operating system.) That is also a way of organizing data. It can be done, even with the current implementations of C, which know about an INCLUDE directory and the source-code directory, at best. The root directory is the boss. It just makes a few strategic calls to the subdirectories visible from the root directory (I am talking about development time). The visibility of data is restricted to a subdirectory. Between subdirectories there are no side effects: all data is passed as arguments or returned as return values. I have been organizing my programs like that for some time, with enormous benefits. I plug and play, even without C++ constructors and destructors. The stack is a fabulous automatic constructor/destructor. And there is no need for housekeeping. We are talking about two languages, which unfortunately are "marketed" as successors of each other! What confuses everyone is that in C++ you can program (almost) just as if it were good old C!

You get my message? Give us a decent C, with longs 64 bits wide (for business applications, accounting, and the like), with decent compilers that can check for memory overwrites in a verbose mode, with some evolution (not revolution) on information hiding, and then move to C++, Smalltalk, when the job is done. We are judging a half-finished product against another half-finished product. That obscures everybody's judgement. And in turbid waters there is good fishing (marketing). Remember: "With our modern marketing techniques we would easily have pushed Beethoven's symphony production into the double-digit bracket".

Sincerely yours,

L. Engbert
Engbert UB
Taunusstrasse 8
61389 Schmitten
Germany
(06084) 2367
FAX +49-6084-2458

As a rip-roaring pragmatist, I applaud that the world has both C and C++ compilers in it. And that more often than not, the two more or less play together. Right now, I view C as the safe and solid bet for delivering serious applications with serious robustness and performance requirements. I view C++ as the cauldron in which ambitious new ideas are being stewed. There's a place in my tool chest for both. That seems to be the case for many of our readers as well. pjp

Dear Mr. Plauger:

In reading your July 1993 issue, I noticed a letter to the editor from Kevin Nickerson asking about distribution of the ANSI C (ISO) Standard. While not in machine-readable form, The Annotated ANSI C Standard, recently published by Osborne/McGraw-Hill, contains the ANSI C Standard on left-hand pages with annotations by C programming author Herbert Schildt on right-hand pages. This book is now available in bookstores or by calling 1-800-227-0900, at a price of $39.95.

We published the book because we felt that there are thousands of people like Mr. Nickerson who would like to have the Standard, but don't have their own copy. As a special offer to your readers, Osborne/McGraw-Hill will give a ten percent discount off the cover price to those who call our 800 number and say they saw it in The C Users Journal.

Sincerely,

J. M. Pepper
Editor-in-Chief
Osborne/McGraw-Hill
2600 Tenth Street
Berkeley, CA 94710
510-548-2805
FAX 510-549-6693

Here's your chance to get a good price on a useful document. pjp

To the Editor:

The code in Listing 1, from Chuck Allison's "Code Capsules: Pointers, Part 3" in the October 1993 issue, was compiled with Borland C++. It does not work with the input

sortargs *.c
as alleged, but it does sort command-line arguments if all are entered explicitly at the command line. It does not work with redirection or with more.

Any Comments?

Yours Sincerely,

James R Lane
13 Waratah St.
Walkerville 3959
Victoria
Australia

Chuck Allison is obviously speaking UNIX shell language here, not DOS command lines. The UNIX shell expands wildcards such as *.c and invokes the command with the argument list spelled out completely. DOS passes on the wildcard as a single argument and leaves it to each command to expand the wildcard as it sees fit (if at all). Redirection has no effect on the interpretation of command-line arguments. pjp

Dear P.J. Plauger:

I have a couple of important comments about "Dynamic Two-Dimensional Arrays," by P.J. LaBrocca in the November '93 issue: VERY! USEFUL!

His dynamic 2-D arrays slipped painlessly into an application that desperately needed them. Exactly the kind of information I read your magazine for.

Jeffrey Siegel
Tokyo, Japan

Thanks. Glad we could be of help. pjp

Dear Mr. Plauger,

This is a response to the letter from Lawrence H. Hardy in CUJ, December 1993. A readily available source of documentation of the PCX file format is Flights of Fantasy, by Christopher Lampton, Waite Group Press, 1993. This book is both fun and informative. It contains C++ source for a flight simulator. It does a good job of explaining C++ classes, VGA graphics, and animation.

Steve Robison

Thanks. pjp

Gentlemen:

I am writing in response to a couple of letters in the December 1993 issue of The C Users Journal about the Hayes AT command set. An excellent reference is the Technical Reference for Hayes Modem Users, Hayes Microcomputer Products, 1992. This publication is available free to Hayes modem purchasers when requested within 90 days of purchase of a Hayes modem. Others can purchase this publication from Hayes for $25.00. They accept prepayment by check, will take Visa and MasterCard, or will send C.O.D.

Hayes Microcomputer Products, Inc.
Attention: Customer Service
P.O. Box 105203
Atlanta, GA 30348-9904
(404) 441-1617

Sincerely,

James E. Truesdale
Systems Analyst
jbm electronics
4645 LaGuardia
St. Louis, MO 63134-9906

Thanks. pjp