Departments


We Have Mail


Missing Listing

In his article, "Intuitive Access to Bit Arrays," CUJ, February 1994, Siegfried Heintze provided a code listing, BitArr.h, which contained the line:

#include "Boolean.h"
However, the listing for Boolean.h was not provided in the article or on the monthly code disk. The contents of Boolean.h are as follows:

#ifndef _BOOLEAN_H
#define _BOOLEAN_H
typedef int Boolean;
const Boolean TRUE = 1;
const Boolean FALSE = 0;
#endif
This file is also available on this month's code disk.

Dear Mr. Plauger,

I have been buying CUJ for a couple of years now and some issues have been useful to me. Others would have been extremely useful had the code been portable.

An issue in the latter category is that for September 1993. I was delighted to see that it covered Windows programming. However, after spending a weekend working through it, I have finished up with only one thing I can use, the code for accessing Windows Help. Even then, I had some trouble with it as I am a novice on the subject of Windows. For some strange reason, I found I could not get the program to create a window with:

wc.lpszClassName = "HELPDEMO";
I finished up declaring the program name as follows:

char ProgName[] = "Help demo program";
and then

wc.lpszClassName = ProgName;
worked just fine.

The other point about this program is that I could only get Mercury to come up underlined in green on the table of contents. Is this how it should be?

Getting back to the subject of portability, I use Turbo C/C++ at home and Borland C/C++ at work. I have no access to Microsoft C/C++ and so got nowhere with the article by David Singleton on cout and cerr for Windows, mainly because it appears to depend very heavily on Microsoft-specific header files and techniques.

I almost succeeded with porting the code in Philip Joslin's article entitled "Using the Windows DIB Color Table." However, I could only guess at values for the various PAL constants and failed miserably in attempting to assign the palette value to the bmiColors structure near the end of the last listing. I tried the following statements:

if (options & PAL_IMAGEINVERT)
   dispRec->bmi->bmiColors[i] = (RGBQUAD)e;
else
   dispRec->bmi->bmiColors[255-i] = (RGBQUAD)e;
But the cast would not work. My thoughts were that, in programs like this, the author could perhaps cater for both the Microsoft and the Borland products by using some preprocessor statements, e.g.:

#ifdef TURBO
    ...
#else
    ...
#endif
This would certainly help to maximise the use readers make of the published code.

In my efforts to learn Windows programming, I have discovered the interesting fact that Borland seems to treat any file starting with c0 as a library file, which causes all kinds of strange errors in a regular program starting this way. I would have thought that, if this is indeed their policy, they could at least have drawn it to the attention of developers. Messrs. Chris H. Pappas and William H. Murray have a book out on Windows programming, supposedly portable between Microsoft C/C++ and Borland C/C++, which completely overlooks this point, so that all the code for chapters 1 to 9 starts with the prefix c0.

Maurice Arnold
11 Fulham Close
Hampton Park 3976
Melbourne
Victoria
Australia

Keith Bugg replies:

1. The code was set up for Turbo C++ for Windows. If you are using a different compiler, you are most likely experiencing compatibility problems. Check this first.

2. In the sample program, the planets Mercury, Venus, Earth, and Mars should ALL be hypertexted (i.e., underlined and in green), not just Mercury. The most likely causes here are:

2.1 A missing {\uldb} token. Each entry in the .RTF file for these planets should read {\uldb Venus}, etc.

2.2 Failure to #define GO_VENUS, GO_EARTH, etc. Check the error file. It is very easy to get the braces in the wrong place; depending on where they are, the compiler may or may not give a helpful message...

Lastly, I would be glad to look at your code. If you can email me the source code, I'll try to help you straighten it out. The Windows Help compiler is very mute, and unforgiving, about errors in the .RTF file. You are welcome to contact me on CompuServe at 70312,3612, or by writing to me at 122 E. Morningside Drive, Oak Ridge, TN 37830. — Keith

P.S. I'm pretty sure 1 or 2, or both, is the problem. Other readers have contacted me & the only problem they had was finding Listing 7!! When "done right," the program does work as advertised.

David Singleton replies:

Yes, you are quite right, the code I published is non-portable in that it does depend on Microsoft's Foundation Classes (MFC). If you do not have MFC, the cout/cerr programs will not work "as is." I deliberately designed the program to use MFC in order to gain, and to demonstrate, the productivity leverage that comes from using a set of library classes.

You may wish to note that MFC is also distributed with the Symantec C++ system. Had I been a Borland user as you are, I would have used what I could from the Borland OWL class library. By your definition, the code would have been equally non-portable, but you would have been able to use it.

I think that I will stop here as I feel that I am in danger of starting a "religious argument" about the meaning and definition of portability. Nevertheless, I would be delighted to give you any help that I reasonably can. You may contact me via CompuServe at 100265.3612 or by writing to me at 30 The Albany, Sunset Avenue, Woodford Green, Essex IG8 0TJ, ENGLAND.

Best wishes,

David Singleton.

Philip Joslin replies:

It would appear from Mr. Arnold's letter that he is trying to cast apples into oranges. The intent of the code given in my article was specifically to use palette entries. The "bmiColors" array in the BITMAPINFO structure referenced by his display record corresponds to palette indices and not to real 24-bit color values (which RGBQUADs do). The code presented in the article is geared solely towards palette indices. For example, when SetDIBitsToDevice is called, the last parameter is DIB_PAL_COLORS and not DIB_RGB_COLORS. This indicates to GDI that the "color values" found within the color table (bmiColors) of the DIB are palette indices and not RGBQUADs. This is borne out in the SDK development reference manuals in their description of the BITMAPINFO structure:

"...for functions that use DIBs, the bmiColors member can be an array of 16-bit unsigned integers that specify an index into the currently realized logical palette instead of the explicit RGB values. In this case, an application using the bitmap must call DIB functions with the wUsage parameter set to DIB_PAL_COLORS." (Programmer's Reference, Vol. 3, MS-Windows SDK)

If the compiler were to allow RGBQUADs (which are in actuality DWORD values) to be cast to the color table integer entries, their values, whatever they may be, would be assumed by GDI to be indices. I would think that some pretty odd happenings might result.

Best wishes,

Philip Joslin

Salutations:

CUJ published an article on Japanese character encoding several years ago that left me hungry. I now have the opportunity to research the character sets directly, and I thought I would share my results.

In the following, my romanizations use the non-standard expedient of repeating vowels. Thus, "oo" and "ou" are both two syllables of the long "o" sound. Also, I apologize in advance to purists whom I will offend with my casual misuse of grammatical terms.

There is a JIS standard 8-bit character set comprising the 7-bit ASCII set and the katakana characters (JIS X 0201-1975 and JIS X 0209-1976). It varies from ASCII in position 0x5c [backslash], where it has the international yen symbol, ¥. There are four other substitutions that may not be strictly observed in the pasokon (PC) universe: 0x27 [apostrophe], 0x60 [grave], 0x7c [vertical bar], and 0x7f [delete]; the replacements are similar characters without curves or gaps. The missing characters are available in the 16-bit JIS standard character sets.

The ¥ character seems to be used instead of backslash, both in C compilers and in Japanese MS-DOS. I have not had a chance to find out whether DOS or C compilers would accept the 16-bit backslash.

The katakana 8-bit extension includes the following punctuation:

0xa1 is kuten, the Japanese period;

0xa2 and 0xa3 are the left and right quoting half-brackets;

0xa4 is tooten, the Japanese comma;

0xa5 is the special word separator;

0xb0 is chouon, the sound lengthener;

0xde is dakuten, the voiced consonant indicator;

0xdf is handakuten, the plosive consonant indicator.

The particle "wo" is 0xa6. The subscript modifiers run from 0xa7 to 0xaf:

a, 0xa7;     i, 0xa8;   u, 0xa9;   e, 0xaa;
o, 0xab;    ya, 0xac;  yu, 0xad;  yo, 0xae;
tsu, 0xaf;
The regular syllables, except for "wo," run from 0xb1 to 0xdd:

a, 0xb1;   i, 0xb2;   u, 0xb3;   e, 0xb4;
o, 0xb5;   ka, 0xb6;  ki, 0xb7;  ku, 0xb8;
ke, 0xb9;  ko, 0xba;  sa, 0xbb;  si, 0xbc;
su, 0xbd;  se, 0xbe;  so, 0xbf;  ta, 0xc0;
ti, 0xc1;  tu, 0xc2;  te, 0xc3;  to, 0xc4;
na, 0xc5;  ni, 0xc6;  nu, 0xc7;  ne, 0xc8;
no, 0xc9;  ha, 0xca;  hi, 0xcb;  hu, 0xcc;
he, 0xcd;  ho, 0xce;  ma, 0xcf;  mi, 0xd0;
mu, 0xd1;  me, 0xd2;  mo, 0xd3;  ya, 0xd4;
yu, 0xd5;  yo, 0xd6;  ra, 0xd7;  ri, 0xd8;
ru, 0xd9;  re, 0xda;  ro, 0xdb;  wa, 0xdc;
wo, 0xa6;  'n, 0xdd.
(si for shi; ti for chi; tu for tsu; hu for fu; 'n for the nasal.)
Most of the PC-class machines seem to use an eight-bit character set as their boot-time or internal font, and several variations exist in the undefined areas. The PC-9800 series (NEC's market-leading 80x86 line) has graphics characters, a backslash, and the kanji for time and for yen in the gaps.

Sharp's X68000 series (doing surprisingly well) fills the gaps with hiragana, the cursive style of syllabic character. Backslash, tilde, and the normal vertical bar are at 0x80, 0x81, and 0x82. The hiragana follow from 0x86 to 0x9f, replicating the katakana pattern's first two rows including chouon; and then from 0xe0 to 0xfd, replicating the katakana's last two rows except for dakuten and handakuten.

My primary references for the foregoing were:

Pasokon Yougojiten ISBN 4-87408-550-4, a PC jargon dictionary.

Waapuro Pasokon Kanji Jiten ISBN 4-7916-0358-3, a JIS Kanji code map/reference dictionary for word processing.

X68000 Tekunikaru Deetabuuku ISBN 4-87148-426-2. X68000 technical reference.

Shinban PC-9800 Shiriizu Tekunikaru Deetabukku ISBN 4-7561-0434-7, PC9800 series technical reference.

PS: I had intended to make a full article out of this information, including a katakana character bit map and a rudimentary text editor with a romaji-to-katakana input filter. Unfortunately, my computer is stateside and I left before I had a chance to put the article together. I think the foregoing is the essence of that article. When I can afford to buy a PowerBook, or even when I can get my wife's wapuro to recognize its floppy drive, I might take a stab at explaining the 16-bit codes.

Joel Matthew Rees
c/o Takeshi Kusuda
1-21-25 Kaminoshima-cho
Amagasaki-shi, Hyogo-ken
661 Japan

I have long been grateful that I am a native English speaker, so I've never suffered the problems of learning English as a second language. Your letter reminds me why I'm glad I never had to learn all the forms of Japanese writing as well. (It also underscores the importance of supporting this rich lexicography in modern programming languages.) — pjp

Dear Mr. Plauger,

Your recent column regarding the clarification of some issues of the C standard has sparked me to ask some questions that have nagged me for a few years. I apologize for being lazy, as I do not have a copy of the standard available, and even if I did, I usually find it a bit daunting. (The mind boggles at the joining of the jargon-ridden computer industry and infinite-length-sentence legal-ese!)

Both questions relate to the use of the void * datatype. Examples are easiest for me, as I am not confident with the language terminology.

1. The function qsort accepts a comparison routine as an argument, which for this example, I will assume is strcmp. The prototypes I have seen are often something like:

   void qsort(void *base,
              size_t element_count,
              size_t element_size,
              int (*compare)(const void *,
                             const void *));
   int strcmp(const char *s1, const char *s2);
Note: I am not sure what the standard is for these two functions but, in any case, this example will suffice for my question. Let's now assume that somewhere strcmp is passed to qsort. When I enable the -vindictive option on my compiler (i.e., maximum checking against the standard) I receive errors to the effect that the datatype of the last argument to qsort does not match the prototype. I accept that this is true, but I have often wondered why the special rules regarding void * do not apply in this case. Does the Standard define the action in this case? Or is my compiler simply taking the lazy option in an obscure part of the syntax?

2. For reasons too boring to go into here, my current project uses a wrapper over the malloc group of routines. The allocator function of this group has a prototype that looks like this:

   typedef int   STATUS;
      /* just for your info */
   
   extern STATUS MEM_Allocate(size_t amount,
                          void **pointer);
The idea is that the allocation will return a pointer to any arbitrary object, with the return-value indicating what happened. An example calling sequence would then be:

   char   *s;
   STATUS   sts;
   
   sts = MEM_Allocate(100, &s);
When I do this I get errors about incorrect datatype for the last argument of MEM_Allocate. As with the last question, this is true. But, again, why don't the special rules for void * come into play?

My interest here is purely academic — I have overcome both problems by creating the appropriate temporaries or type-casting. But I also suspect that there are good reasons for the errors, and hence I may just be creating problems for the future by "turning off" the errors.

The only reasonable answer I can come up with is that these errors relate to the fact that I may be asking the language to do conversions between void * and a SOMETYPE * across compilation units. But I am not sure, as I would have thought the compiler would generate the correct conversion logic using generated temporaries. I further assume that I am getting away with my type-casting, etc. because on all the machines I use, all pointers are the same size regardless of the object that they point to.

What am I missing here? Thanks for any assistance.

David Brown
dcb@atb.ch

Both issues you raise have been discussed within X3J11 and WG14. Both are intentional limitations to the extent to which C will automatically convert between a pointer to char and a pointer to void. You've done the right thing by adding type casts to reassure the translator that you know what you're doing. The code happens to be portable because of the requirement that a pointer to character type have the same representation as a pointer to void. — pjp

P.J. Plauger:

When CUJ published David Burki's article, "Date Conversions" (Feb. 1993, pages 29-34), I confess that I was mildly surprised that the Journal would bother with such material. Surely, I thought, everyone has already written his own date manipulation routines by now? As each succeeding issue appeared, and more letters about date manipulations appeared with them, I began to understand that I was at least half wrong. It seems that everyone is writing his own routines. That is the problem.

When Joao de Magalhaes's letter appeared in the January 1994 issue and succinctly described this situation as a problem, I decided that the time finally had come to speak up.

Over the years, I have researched and written a group of date/time routines which cover all of the basics — day of week, interconversions among Julian Day Number (JD#), Julian calendar and Gregorian calendar, days between dates, days until/since a date, date N days after/before a date, etc. These routines go farther afield to cover Gregorian Easter, Julian Easter, epact, lunar phase, and so on. There are some esoterica, such as a set of functions for manipulating Mayan calendar dates, and basic astrology/astronomy functions. Still another group covers the DOS date/time area — packing and unpacking DOS file date/times, getting and setting system date and time, and so on. Too many! And still not enough.

These routines are already documented, because I run my own shop and insist upon that. They work as an integrated set, at least for our purposes. And they are correct, as far as our testing has been able to determine. Programmers who have spent time to investigate this area are aware that a fair number of the published routines are limited in scope, easy to misapply, or are just plain wrong.

So much for foreground activity. In the background, I have been toying for about a year and a half with the notion of cobbling this material together and publishing it as a book. There are on the order of a hundred functions, and they fill most of a three-ring binder. The general gist of the book would be a "Standard C date/time library," and the general flavor would be along the lines of Louis Baker's C Tools — introduction, discussion, source code, test bed, example code, commentary and bibliography (where relevant) for each function. That's not terribly different from your Standard C Library format, for that matter.

So, this is do-able. And there is apparently a need. A lot of obvious questions remain, however:

1. Is this material, in general, marketable in book form?

2. What application areas should be covered?

3. How can I go about getting some corrective feedback before going to print? We have standardized on type double for JD#, for example, and fold dates and times together as single JD#s. We like this scheme, but aren't married to it. If the rest of the world hates it, I'm willing to consider cogent arguments for change. My concern is not to rush to print, but to produce a book that is generally useful and integrated. Above all, I would appreciate your thoughts on this subject, particularly in the areas of setting useful standards, portability, and setting up a procedure for reviewing the code outside the confines of our own shop.

If you think that this idea warrants further work, I can supply relevant header files and some explanatory write-up on the next pass. Several of the older functions are still passing structures by value (I wonder what idiot coded that...?), and are part of a maintenance cycle that is going on right now. Real work permitting, that should be complete soon.

Sincerely,

Lance Latham

I would not expect a huge market for a book as specialized as you describe, but I certainly see reasonable interest out there. As you observe, time and date calculations are a never-ending source of complexity. If you can provide solutions that are reasonably solid and packaged in a useful form, as you believe you've done, there will be plenty of people eager to read what you write.

Your next step should be to produce an outline (or table of contents), a preface, and a sample chapter or two. Then you can shop for a publisher. A good one will provide you with editorial guidance in whipping your book into shape and making your code most presentable. Good luck. — pjp

Dear Mr. Plauger:

Recently, I was working on a C++ program and I needed to write data to and from a file, but keep the input and output classes separate. I remembered reading something about this before, so I checked all my magazines and found the reference on page 109 of the July 1993 C Users Journal.

According to the article, "To open a file for both input and output in a future conforming implementation, you will have to replace..." the fstream declaration with the following:

int mode = ios::bin | ios::trunc | ios::out | ios::in;
ofstream f("recs.dat", mode);
ifstream g(f.rdbuf(), ios::bin);
I tried the idea in the program I was working on, but was unable to get it to compile. The following is one of the code samples I used for testing.

#include <stdio.h>
#include <fstream.h>

main()
{
   char temp[255];
   int mode = ios::binary | ios::out | ios::in;
   ofstream f("test.fil", mode);
   ifstream g(f.rdbuf(), ios::binary);
   
   f << "test\n" << flush;
   g.seekg(0L);
   g >> temp;
   cout << temp;
   return 0;
}
After some trial and error, I came up with an alternative that seems to work fine on both my Watcom C/C++ compiler and the Borland compiler I tested it on. The line containing ifstream in the sample program needed to be replaced by the following in order to execute properly:

ifstream g(f.rdbuf()->fd());
Also in order to read what has been previously written to the file, you need to be sure to flush the output buffer to disk after you write to the file.

I thought this information might be as useful to your readers as it was to me.

Thanks for a great magazine. Looking forward to future thought-provoking issues.

Sincerely,

Laura Michaels
Intercomp
P.O. Box 6514
Delray Beach, FL 33484

I'm not surprised. This is an area of iostreams where you can expect considerable change as the C++ Standard comes into wider use. Thanks for telling others how to do the job with today's technology. — pjp

We Have Mail

The letter of Mr. Joao C. de Magalhaes published in the January 1994 issue contains a number of errors and/or misinterpretations of matters related to calendars and astronomical time. Time is a complicated subject, so, as an astronomer, I am never surprised when people have trouble with it.

First, the work he cites, Seidelmann, P. K. (ed.), Explanatory Supplement to the Astronomical Almanac, University Science Books, Mill Valley 1992 (ISBN 0-935702-68-7), is the current authority on these matters. This book is a model of clear, precise definition and discussion of calendars both civil and ecclesiastic, time, time scales, and astronomical coordinate systems and frames. I can recommend it highly to anybody with an interest in these subjects at any level from casual curiosity up. Further, it contains a wealth of validated, codeable algorithms and procedures for time-related calculations and conversions.

Mr. de Magalhaes' discussions of Julian date (JD), modified Julian date (MJD), and the epoch J2000 are not quite right. To begin with a bit of background, the system of Julian dates was established by astronomers to permit the unambiguous definition of the date and time of an astronomical observation, measured as days and decimal fraction after JD 0.0, which is Greenwich noon, 1 January 4713 BC, Julian proleptic calendar. This date was chosen to be well before any known recorded astronomical observations so that all Julian dates would be positive. For precise work, the time scale (such as Universal Time (UT) or dynamical time) should be specified. The Julian date starts at noon because the European and North American astronomers who established this system didn't want to have to worry about changing the Julian day number, the integral part of a Julian date, during the course of a night's observations.

There is no discrepancy between the years in the definition given above (and in Seidelman) for JD 0.0 and that given by Mr. de Magalhaes, because there is no year zero in the BC/AD system. The year 1 BC is followed by the year 1 AD. However, the definition of modified Julian Date (MJD) should have been given as:

MJD = (Julian date) - 2400000.5
Modified Julian dates begin at midnight, not noon. Again, for precise work the time scale should be specified. The use of MJD is not restricted to the Coordinated Universal Time (UTC) scale, though that is probably the scale in which it is most often reckoned.

J2000 is the designation of an instant in time; in astronomical parlance, an "epoch." This time is:

J2000 = 1.5 January 2000 = JD 2451545.0 TDB
or noon of 1 January 2000 on the Barycentric Dynamical Time (TDB) scale.

To describe the use of J2000 requires a bit more background. Many astronomical coordinate systems, including inertial systems, are described in terms of the orientation of the Earth's axis of rotation.

Because the direction in space pointed to by the Earth's north pole is not fixed — there are both short period and long period components (up to about 25,000 years for precession) in its very complicated motion in the sky — it is customary to specify a standard epoch for such coordinate systems so that their orientation is described precisely. These standard epochs are changed about every 50 years for practical reasons related to how astronomers specify the coordinates of astronomical objects in catalogues and use or determine these coordinates for or from observations. The previous standard epoch is designated B1950. J2000 (defined above) is the current standard epoch. The epoch of B1950 was not conventionally used as the origin of a time scale, and it is not expected that J2000 will be so used, either. Julian dates, or modified Julian dates, will continue to be used indefinitely.

Concerning the date-of-Easter algorithm from Carmony, L.A. and Holliday, R. L., A First Course in Computer Science with Turbo Pascal, Freeman & Co., New York 1991 (ISBN 0-7167-8216-2), I believe I can explain why the domain is restricted to 1900-2099. Although I have not analyzed the algorithm in detail, it is probably restricted because it assumes that leap years occur every four years, as was the case in the Julian calendar. In the presently-used Gregorian calendar, a leap year occurs only when the value of

leap = (year % 4 == 0 && year % 100 != 0 || year % 400 == 0)
is one (true). Thus, the years 1900 and 2100 are not leap years, but 2000 is. 1900 can be included in the domain because Easter occurs after the trouble point, the end of February.

A discussion of the ecclesiastical determination of the date of (western) Easter can be found in Seidelmann, Section 12.22 (pp. 581-583). An algorithm for calculating the date of (western) Easter, valid for any Gregorian year, is given on p. 582. A discussion of the history of the Gregorian calendar, and why and how it replaced the Julian calendar, can be found in Seidelmann, Section 12.23 (pp. 583-584).

Sincerely yours,

John G. Kirk
Santa Barbara Activity
5266 Hollister Avenue, Ste. 117
Santa Barbara, CA 93111-2066

Wow. It's always nice to find a calendar junkie who's even more obsessive than I am. Thanks. — pjp

Dear Mr. Plauger,

Constance Veeney from The Netherlands wrote to me on the subject of the Easter Day function, adapted from Carmony and Holliday, which I included in my letter published in CUJ January 1994 issue. She points out that there's a remarkable similarity between that function and an Easter Day algorithm by Gauss, and most kindly provided me with the COBOL source code.

As a matter of fact, it appears that the Pascal code in Carmony and Holliday is a subset of Gauss' algorithm for the domain 1900-2099, while Gauss' algorithm domain is 1583-2199. Unfortunately the C version would be a bit too long to present in a short letter.

Sincerely,

Joao C. de Magalhaes
R. Almeida Garrett 16 5E
P-2795 CARNAXIDE
PORTUGAL

Dear Mr. Plauger,

I write to report a bug in the function free in your book The Standard C Library. The symptoms are that the return commented as "erroneous call" is frequently taken and memory is lost.

Listing 1 shows the function, with the line I suspect faulty highlighted. The code is trying to find a value of qp which is less than q (the address of the area being freed) but for which qp->_Next is after q. That is, the block q should be chained immediately after qp.

I suggest the test < should be reversed to >.

For reference/indication of what some people still use: I run the QC C compiler, which accepts most of K&R, and was written by Jim Colvin for the Z80. The Z80 in question is my own design of 14 years' vintage. I am currently moving the compiler to a (radically different) NS32016-based home-brew and, wanting to use compliant library functions, am slowly typing in the contents of your volume, debugging hardware, compiler, and libraries as I go.

Yours sincerely,

PL Woods
14, Cromwell Road
Muswell Hill, London
England N10 2PD

You're quite right, it's a bug. My brother, Dave Plauger, reported it last year. (His company, Mercury Computers, licenses the Standard C library.) I admire your persistence in working in such a minimalist environment. — pjp

Editor,

I got the code for the data compression article by Philip Gage and tested it on a Sun SPARC running SunOS 4.1.1. I compiled the code as it was in your archive using gcc -O, and timed it using time(1). For the text file I used a PostScript file, and for the binary file I used a PostScript viewer executable (gv(1)).

I tested against compress(1), an LZW compressor, and gzip, an LZ77 compressor. Times are sys+user obtained from time(1). Times can vary 10% or so using time(1), and I made no effort to average them, although I did run some tests more than once for sanity checking. Note these times are not real time, which is strongly affected by system load. Listing 2 shows the results.

I don't know if these numbers mean gcc has a lousy code generator, or that Gage's code has constructs that run slowly on Sun. The numbers do strongly suggest testing on the target architecture before drawing conclusions.

Regardless, it's an interesting idea.

David X Callaway
dxc@dwroll.att.com

P.J.,

Just a quick note to tell you that I really think that CUJ has gone through a metamorphosis recently. Frankly, I had made up my mind a while back not to renew my subscription, since I found the magazine increasingly less useful. My wife renewed it for me anyway, and I must say that I am pleased that she did. The last several issues have been quite useful and interesting. I know that it is difficult to keep finding fresh topics to cover in fresh ways, but you seem to have done it.

thanks,

Dave Rogers
dave@rsd.dl.nec.com
CIS: 76672,2455
M & R Software, Inc.

Thanks to both you and your wife. — pjp

P.J. Plauger

In the Jan. '94 issue of CUJ, Manuel Lopez of Dallas, TX asks whether anyone knows a source for a PC edition of the VI text editor called PC/VI or VIPC.

Well, no, actually I don't know where to get this, but I can tell you that there's an excellent version of VI called ELVIS which is both maintained by real live people and freely available in C source form. I've used many VI emulators, and only ELVIS really "feels like" the Berkeley Unix VI we all know and love (or hate, as the case may be).

ELVIS runs not only on MS-DOS but also on Unix System V, Unix 4.3BSD, SunOS, SCO XENIX-286 and -386, Coherent 3.x and 4.0, Minix-ST and -PC, AmigaDOS 2.04, and Atari TOS. So if you want a VI clone which runs on all your platforms, ELVIS is probably the best choice.

ELVIS version 1.7 may be retrieved from uunet:/systems/gnu/elvis-1.7.tar.gz (it's 198,371 bytes in gzip format), and rumor has it that version 1.8 will be released soon. The primary author of ELVIS is Steve Kirkendall, kirkenda@cs.pdx.edu. Steve is a friendly and helpful guy who takes pride in his work.

Bob Weissman
171 Easy Street
Mountain View, CA, 94043
bobw@procase.com

Thanks for the information. — pjp

Bill,

A letter for your esteemed organ, etc. etc.:

Reading the January 1994 edition of The C Users Journal made me realize how quietly my company has marketed QA C++ — seeing the Gimpels follow suit with a C++ lint product is most gratifying (we went live with such a product in 1992).

Of course, the Gimpel product, based on PC-lint for C, cannot really do justice to the problematic edifice that is C++, although I'm sure they will catch up over the next few years. The C++ do's and don'ts have many sources — Plum/Saks, Meyers, Cargill, and Henricson/Nyquist (their excellent public-domain Programming in C++: Rules and Guidelines) — which all C++ tools would do well to incorporate.

I commend Gimpel Software for following Programming Research's lead in this area.

Regards,

Sean Corfield
sean.Corfield@prl0.co.uk

Nice of you to mention your own product in passing, Sean. Cheers. — pjp

P.J. Plauger:

I'm not sure I agree with reader Russell Hansberry that ANSI C standards for embedded systems are necessary. I've been doing embedded C programming for years (sometimes developing rommable applications). When doing embedded programming, it is necessary to understand how the underlying implementation works. Stdio functions often aren't useful (except perhaps sprintf, to format a string if necessary). A prudent approach may be to assume you have nothing: every function call is a function call you provide. You can provide whatever parts of the library you actually need, and make them reentrant (or understand why they aren't reentrant, and implement appropriate protections). ANSI C does two things:

1) provides a set of functions you can assume are there

2) provides a specification for the language

For embedded work, the second is useful, the first is of dubious value.

marty
leisner@sdsp.mc.xerox.com
leisner@eso.mc.xerox.com

P.J. Plauger

I do appreciate the nature of English, American English, and technical English to bend, adapt, and conjure up new words on demand, but I have to object to your complexification in the CUJ Editor's Forum from February. Really, rather than "complexify," what is wrong with standard English such as "complicate?" I cannot for the life of me see any nuance in your context which might differentiate the two.

Other than that, keep up the good work.

Ciao from NZ,

Arnim

My standard for English is the venerable Oxford English Dictionary. While it brands "complexify" as "rare," it does give a citation dating back to 1830. I believe the dominant language in New Zealand then was Maori. — pjp