Departments


We Have Mail


Dear Editor:

Regarding your and several readers' reactions to an impertinent request:

I have taken a lot of beatings for daring to ask if anyone has the source to the Turbo C 2.0 library (and is willing to sell it). I mentioned that I had asked for the $150 source, but that Borland refuses to service Turbo C 2.0 owners. What would you say if you purchased a brand new car only to find out that the manufacturer insists you buy their next model if you want any spare parts? Turbo C 2.0 has a few puzzling points which I will now never learn the answers to, such as:

Why is the sprintf/cputs combination about 10 times as fast as the regular printf?

Try to get any input from the keyboard via getch or getche after a freopen(filename, "r", stdin), and prove their manual wrong.

Prove the _stklen/_heaplen documentation wrong.

Now, if you know that their C++ 3.3 and 3.1, where the ++ seems to refer to the price tag, sell poorly, you have a nice example of planned obsolescence — in an industry whose products theoretically never grow old. Unless you also produce the operating system your product relies upon, you have little leverage to make your product obsolete. Microsoft can play that game. How long until Borland buys up some outfit to enter the operating system game? Microsoft C++ seems to be selling on that count only.

Regarding your Power C review (September 1992):

I encourage you to continue the new practical "commercial" orientation and I'd like to see more frank reports of the same style. (Wish list: cross-platform compilers, cross-platform libraries). The only thing the report fails to tell us is why Power C is really inferior to Turbo C and Microsoft C. Well it is. Power C takes more than twice as long to compile, and code is about 60% to 70% longer than for the Turbo C 2.0 compiler. And then Power C is a little pedantic. Example: Turbo C accepts

static int unsigned i;
Power C does not! Example: Turbo C accepts:

printf("Time lapsed: %u.%#0*d seconds", 2, centiseconds);
while Power C does not! Question to the standards committee: Who is compatible with the standard? Question to all concerned: What is the use of a C Standard, if none of the mentioned compilers works under more than one operating system? And the champions of reusability (C++, hey?) tell us we can use the same code over and over again, provided it is MS-DOS? Who is kidding whom? Anyway, I'd like to see more reports in that line.

Regarding the CUG Library:

I have pointed out at least a dozen mistakes and oversights in source you keep distributing. Do you ever contact the authors? In my opinion it would be about time to clean out the library a little (which in general is at least as good as any other public domain/shareware source). E.g., you may safely discard any of the plentiful "C source analyzers" which are built on less than a full scanner/parser/preprocessor. Even the Brandt/Brown compilers and the Sherlock programs are bugged. (I think they are great products). I have stopped informing you. Anyone among the readers interested?

Regarding anything fit to print:

I also feel the time has come to open a black list. I have fallen victim to enough "null-terminators," from straight office-suite crooks to software amateurs, all of them great values according to the ads. I think there is a need to be frank and to report on this aspect of the industry as well.

Sincerely yours,

Ludger Engbert
Taunusstrasse 8
D-6384 Schmitten-Arnoldshain, Germany

I got the message that several practices in our field make you angry. Some of them make me angry too. My experience is that telling people they're wrong seldom motivates them to get "right," by your standards, particularly if they don't completely buy into your standards. I have also observed that you can report on things that are less than perfect, not get the outcome you want, but still be effective in changing the world. Few things change for the better as fast as we'd like. — pjp

Mr. P.J. Plauger:

Does ANSI C resolve this problem that I am having with datatype char? The problem follows:

When I am reading a byte of binary data using the char datatype, any value below 7F hex maps directly onto an existing ASCII character. However, when the value exceeds 7F, from 80 hex up to FF hex, my C compiler doesn't report an error on compilation, but it prints values like those below:

binary byte    hex
00             0
01             1
02             2
...            ...
7F             7F
80             FFFFFF80
81             FFFFFF81
...            ...
FF             FFFFFFFF
I believe this is compiler dependent, but I was wondering if this problem will be fixed in ANSI C or is there some other workaround.

By the way, I was not able to send e-mail directly to pjp@plaugher.com. My mailer-daemon reported that the address doesn't exist. Perhaps I am missing some other information you are not telling me.

Thank you,

David Fang
daffy@chips.com
Chips & Technologies
3050 Zanker Road
San Jose, CA 95134
408-434-0600 x2443

Nothing is "wrong" with the result you get. Your implementation simply treats char as a signed representation, which it is at liberty to do. If you don't like that result, use type unsigned char instead.

You can get e-mail to me if you spell my name right. Powerful cosmic forces keep trying to inject an h into my surname. — pjp

Letters:

You published a nine-page article in your February 1992 issue that explained how to do wildcard subdirectory searches under MS-DOS. I wrote to point out that under any variant of UNIX, the same job (and more) could be done in one line. I concluded that I was thankful that I used UNIX rather than a proprietary and restrictive operating system like MS-DOS.

The editor replied in the June 1992 issue. He did not dispute my point but stated that he preferred MS-DOS which had 20,000 software packages available at a fraction of the selling price. Then in the November 1992 issue, you published Tim Berens' letter that again did not dispute my point but instead questioned how I could call MS-DOS "proprietary" when MS-DOS runs on "approximately 100,000,000 machines in the world". Mr. Berens accused me of being a "UNIXhead" and the editor endorsed the letter.

You both seem to have forgotten that The C Users Journal is directed to programmers. I pointed out that programming under MS-DOS can be needlessly harder than programming under UNIX (as in nine pages vs. one line). I rest my case since both of you chose to ignore the merits and instead changed the subject to the quantity of MS-DOS computers and programs. If that isn't religious zealotry (but for MS-DOS), then I don't know what is.

However, to briefly reply on the merits to your assertions about MS-DOS: millions more have read comic books than have read War and Peace. Does that mean comic books are great works of literature while War and Peace is not? Don't confuse quantity with quality. My Webster's dictionary defines "proprietary" as: "something that is used, produced, or marketed under exclusive legal right of the inventor or maker." To me, that sounds a whole lot more like MS-DOS than UNIX.

UNIX runs on computers from PCs to Crays; Microsoft limits MS-DOS exclusively to Intel 80x86-based systems. Check almost any college computer to find UNIX source code. Microsoft limits MS-DOS source code to a very exclusive few, such as IBM. If you produce a major UNIX application, you probably own UNIX source code. If you produce a major MS-DOS application, you probably own Undocumented MS-DOS, just purchased Undocumented Windows, and are watching news of the rumored FTC investigation into whether Microsoft reserved parts of its operating system exclusively to itself, via secret functions unavailable to Microsoft's competitors.

So lighten up, DOSheads. Bill Clinton said that protesting a war doesn't make one unpatriotic; and I say that validly criticizing MS-DOS doesn't make DOS a bad or low-volume operating system — just a difficult programming environment. And again, that is why I am thankful that I do most of my programming under UNIX rather than under MS-DOS.

Andy Levinson
11575 Sunshine Terrace
Studio City, CA 91604-2938

I'll lighten up if you will. I didn't challenge your original statement because I absolutely agree with it. Programming under UNIX is often vastly easier than under MS-DOS. But I will challenge your latest thesis. UNIX is no less proprietary (and no more) than MS-DOS. Its proprietor simply chooses a different marketing style. For the record, I personally don't {hate love} {UNIX MS-DOS}. Each has its advantages and drawbacks, which you've neatly outlined. — pjp

Dear Editor,

I have been enjoying your journal for some time now. I feel it is the best of its kind. I would like to make my wishes known for possible future articles. The following is my list:

1. Queues, deques, and containers. What are they, and what is their purpose?

2. Function Pointers. What are practical uses?

3. Double pointers. What are practical uses?

In advance thank you and keep up the good work.

Sincerely,

Bob Buchanan

Noted. Thanks for the input. — pjp

Dear Dr. Plauger:

As developers of the COMPEDITOR, the CASE Finite State Compiler, we appreciated Alan Cline's article, "Build Applications Faster with State-Transition Automatons" in the December 1992 issue. Articles like his are the only way new developments in software engineering are introduced to the programming profession.

We were surprised that he did not use our compiler to develop the state tables and code. Perhaps he wasn't aware of it. The COMPEDITOR forms state tables and source code from data keyed into the table. Talk about fast development: it took about five minutes to design each automaton with the COMPEDITOR, and it generated the source code in a few seconds.

However, we disagree with Alan on the issue of state minimization. The problem with minimization is that a reduced table introduces another level of abstraction which makes changing or adding new functionality to the program more difficult. Often as not, a reduced table must be expanded before changes can be added.

Furthermore, one must coalesce many states before any appreciable savings occur, since each COMPEDITOR state-table cell uses only two bytes of memory. For example, just 20 bytes are saved by removing an extra state from a table containing ten events.

We don't believe it's worth a developer's time to remove any but the most apparent redundancy, especially since state reduction can make it harder to add new features to the program later.

Sincerely,

Allen Y. Edelberg
AYECO Incorporated
5025 Nassau Circle
Orlando, FL 32808
(407) 295-0930

Letters to the Editor:

I'm writing in response to the letter by Tim Berens in the November 1992 issue knocking UNIX. I have extensive experience with UNIX and MS-DOS. (I ported MINIX 1.1 into 286-protected mode to teach myself about operating systems, using Aztec C on MS-DOS.) I've used over 20 different C compilers on CP/M, MS-DOS, and UNIX. Most problems are solvable by investigating how they would be solved on UNIX, and then finding an appropriate way to do it on DOS. Solutions on UNIX make sense. On DOS, it's often, "Well, it works."

UNIX was a system designed for writing C code. Many of the tools that analyze C code run well on UNIX and have problems on DOS. There is a religion to UNIX; it involves freedom. When it was initially invented, the attitude was, "If you don't like the way something works, change it if you're able." If you know how to do it, it's very easy to change.

Mr. Berens raises a number of points:

Proprietary: Many flavors of UNIX are not proprietary. There are teaching tools based on UNIX as a model (MINIX and XINU). BSD386 is free and based on the NET-2 release of Berkeley UNIX. It is possible to write software which runs well on both DOS and UNIX if you know how to do it on UNIX.

UNIX needs immense training: No argument here. I wonder about Mr. Berens' expertise with UNIX. Experience and expertise are very different. The need for training and consultation is clear. Much consultation is available free on the Internet. In addition, there are wonderful examples of C programming easily obtainable (GNU, for example). It's very useful to take apart a working program which is agreed to be of superior quality. UNIX is very flexible, but it has complexity.

UNIX means portable: Actually, if a program can run on many flavors of UNIX, it is often very portable. But portable software is hard to write and takes experience. The definition of portability is a program which has been ported. Many DOS programs (which are ASCII-text based) use direct keyboard/video BIOS calls. (I don't think it's necessary.) When you hook up a terminal to a COM port and execute the ctty com1 command, you rapidly see how many commands break unnecessarily. It is possible to write portable C code for DOS and UNIX — but when the header file <dos.h> is included, it goes from ANSI C to vendor C. Many of the packages available on UNIX (curses, the Berkeley opendir routines, etc.) are available on DOS.

On DOS, machines constantly hang up during development. At least on UNIX they core dump and allow post mortem analysis. (You can work backwards. On DOS you generally have to work forwards from main.)

In A. Levinson's letter in June, he talks about a find command on UNIX. Most UNIX commands are available on MS-DOS. (Most are freeware. The MKS Toolkit is of high quality and runs on DOS.) In addition, the Berkeley opendir routines work extremely well on DOS, giving portability to directory searches. When using multiple systems, it is highly desirable to have them work as similarly as possible, at both an application and a source-code level. This aids portability both of user skills and of source code. Source code is useful not only for changing the behavior of programs but for understanding how programs work. It is possible to write portable DOS/UNIX source code for almost everything which fits on DOS. In the June issue, 20,000 commercial packages are cited in DOS's favor. Many of these packages are quite bad. As an experienced programmer, I find the lack of make-able source code a hindrance. What I've seen on both DOS and UNIX is that the availability of freely copyable source improves the quality of programs. (I've recently bought a PC at home and have not spent a cent on software. I'm typing this in Elvis, a vi clone which runs on DOS and UNIX.) I also use less (instead of more). Both run on MS-DOS and UNIX.

Sincerely,

Marty Leisner
leisner.henr801c@xerox.com
leisner@eso.mc.xerox.com
(716) 654-7931

As one of the original users of UNIX, I have to agree with much of what you say. If you have the time and the inclination, you can do a lot and learn a lot as a UNIX programmer. But if you just want to buy software that works, it's hard to beat DOS for price and availability. — pjp

Dear Mr. Plauger,

First and foremost, I apologize for misspelling your name in a previous letter!

You've mentioned in recent editorials and comments that you're working with the ANSI C++ committee. That makes you a perfect target for my suggestions. I like the language a lot (just as I like C), and think that it's well designed and well thought out. I do have a problem, though.

A number of my functions — especially those critical to speed and/or size — have been written in assembler. (More to the point, many of them were written some time ago, and I really don't want to re-write them.) There's no easy way to marry external functions to classes at present, and I wish the committee would address this. I can't be the only one with this problem.

I could use short "dispatcher" functions as class members, and let them call my library, but that adds to code size and slows execution. Why not instead add class prototyping to the language, with the ability to write member functions in other source modules? Say, much in the same way that ANSI function prototyping has been implemented for C? (You can put the prototypes into a common header and define the actual functions where you choose.)

(Actually, it'd be nice if someone finally acknowledged that no one writes a 64K source file, and provided ways to implement encapsulation across several source modules. Why not use something like a master project file with individual protected sub-project files? Better yet, why not create a pointer-to-member concept that will let me prototype a class, then put the members — any member — where I choose?)

Just a thought. Thanks,

Stephen M. Poole, CET
122 N. Main Street
Raeford, NC 28376

The C++ committee is just starting to get more formal about handling requests for extensions. They have a considerable backlog to work off at present, and growing resistance to new ideas. A recent issue of SIGPLAN Notices (sorry, I forget which) lays the ground rules for proposing extensions to the committee. — pjp

Dear Sir,

I read with interest in the November 1992 issue of CUJ the Q & A discussion of "Check Digits for Error Detection." I was recently involved in a project requiring the entry of credit card numbers into a database for billing purposes. During this project I ran across the check-digit algorithm used for credit card numbers, which may be of interest to Mr. O'Haire. Like the Social Security tradition alluded to by Mr. Pugh, I received this one through word of mouth. However, this one has been verified in daily use with thousands of card numbers, and it works for MC/VISA, AMEX, and Discover. It does detect transposition of adjacent digits, which is a common typing mistake.

Starting with the last digit of the credit card number and proceeding in reverse order, multiply each digit of the card number by an alternating sequence of 1s and 2s to form individual products. In those cases where the multiplication yields 10 or more, subtract 9 from the product. Now add all the products together; the result must be divisible by 10. That is, the last digit of the card number is chosen to make the total sum divisible by 10. Consider the following example:

Card # 4011-7231-9528-4803

Compute 3 x 1 + 0 x 2 + 8 x 1, etc. to get a total sum of 70, which is divisible by 10.

I hope you find this algorithm of interest and of possible use.

Sincerely,

Christopher R. Skonicki
7304 Jonathan Way
Louisville, KY 40228

Thanks. I'm personally always curious about these little algorithms that permeate our lives. — pjp

Editor:

I am writing in reference to the "Ross Data Compression" article (CUJ, Oct. 1992, p. 113). I have tried the program on several text and binary files, and it works. However, there appears to be something not right in the code.

In comprs.c (p. 114), the variable ctrl_bits is declared to be of type unsigned integer. And, a couple dozen lines later, it is used in an assignment statement:

*ctrl_idx = ctrl_bits;
However, ctrl_bits is not initialized prior to the assignment.

I would like to use this code as a starting point of some work. I cannot, however, without a priori reason, trust an uninitialized variable. I would really appreciate it if you would clear up this little matter.

Respectfully,

August Grammas
4376 Cove Island Dr.
Marietta, GA 30067

Ed Ross replies:

The variable ctrl_bits is not initialized for two reasons. First, there is no meaningful value to which it can be initialized. Second, it is not necessary because it will be filled with information, one bit at a time, before it is referenced.

Mr. Grammas sees that there is no assignment to ctrl_bits in the listing before it is referenced and wrongly assumes that the program will reference an uninitialized value. He does not see that at runtime ctrl_bits is referenced only when the variable ctrl_cnt indicates that ctrl_bits contains a full 16 bits of information.

Sir:

Since CUJ is the only computer magazine to which I subscribe, to you I write to vent my frustrations.

Earlier this year (June) I ordered software from one of your advertisers, Strategic Software Designs Inc. (SSDI). Although SSDI was happy to accept my money (via VISA), SSDI was unable to deliver the software, and unwilling to answer my phone calls and faxes. Finally, on October 12 a phone call was answered and I was told a VISA credit was being processed. Either I was misled, or the American banking system is in bad shape.

To run an application development shop, I cannot afford to waste time or money. Yet I have wasted both on one order.

So, why am I writing? To ask for help ... To suggest a "Bravo and Beefs" department — a forum for developers to air their feelings.

A place I can read the testimonials of satisfied and dissatisfied customers.

A place where next month I will read that 300 people wrote saying they are happy with SSDI products and services, and one person who wrote with the same problem. Or, the opposite.

A place for the smaller software houses such as SSDI to be critically reviewed by their customers and peers.

The "Letters to the Editor" department should be the forum for comments on CUJ quality, content, and style. And since this is a letter to the editor, I will address a comment to the editor: remember, a datum is, but data are.

Except for the minor financial loss, and reading too many "the data is," I feel better already. As they say, no pain, no gain. And the fewer CUJs, the fewer informed decisions.

Yours truly,

Peter Eberhardt
Eberhardt Associates, Inc.
288 Laird Dr.
Toronto, Ontario
Canada M4G 2X3
(416) 429-5705

You make a cogent case for a customer's gripe center, but I doubt we'll ever get into that business wholesale. An occasional plaintive letter such as yours is more than enough to keep me alert to rip-offs.

As for the great "data" controversy, I favor its use as a collective noun rather than a plural. The data is read in much the same way the coffee is brewed, and unlike the way the peanuts are shelled. Of course, we could go the home-boy route and pioneer the use of "the data be read in." Somehow, I suspect that would lead to a clash of cultures, however. — pjp

Dear Sirs;

I can understand how genetic engineering can create the seedless apple-orange (see cover of CUJ, January 1993), but I am puzzled about where you get the seeds to grow them.

Wayne Beard
1825 E. Third
Tempe, AZ 85281-2901

You get them the same way you deal with impossible software requirements — you subcontract. — pjp

Dear Mr. Plauger:

re: new unit for angular measurement

I was encouraged by Mr. Bertrand's article, which brought to my attention the CORDIC technique and its different method of dividing up the circle. However, I had several complaints about the article, and considered proposing a follow-up article. Looking at the original article by Jack E. Volder reduced most of my objections to mere quibbles.

I believe that Mr. Bertrand's very first paragraph should have been presented more emphatically, to stress that the CORDIC technique is not highly accurate and thus is only appropriate for monitor displays (and similar uses). Members of the general public, such as myself, may overlook this point while trying to grasp the subsequent exposition and code.

Much of the appeal of Mr. Volder's technique is in its simplicity. I suspect that this was necessary, in the late 1950s, to achieve adequate speed. This aspect is not obvious from Mr. Bertrand's presentation using C code. However, I do accept the obvious: that modern computers, running C, can still outpace the original CORDIC hardware.

On the other hand, I also suspect that we could now afford to improve the CORDIC technique by the judicious use of conditionals. I find it very annoying that 45 degrees can never be exactly represented, even though this is the very first rotation that we encounter. Having landed on it initially, it seems ludicrous to flop around it for thirteen more rotations but never get back to it. With only a two-credit C course as preparation, even I could improve things here. In this case, it might even save time. I do not see the expansion of the vector presenting a serious problem when a three-way choice is possible for the angular rotation at each step (i.e., clockwise, counter-clockwise, or zero rotation). It will be expanded at each step in all three cases.

The article also reawakened an interest of mine: devising a new unit of angular measure. While high-level mathematicians may prefer radian measure, most applications are presented in terms of degrees. I believe that the degree is no more appropriate to the computer than is decimal notation. (I have come a long way since I was first offended by computers that did not return degree measurements in minutes and seconds.)

Even in the absence of binary-based computers, I would still be troubled by the degree. Ideally, our alternative to the radian would be based on classically constructible angles. These in turn lead us to consider the five known Fermat primes: 3, 5, 17, 257, and 65537. We are also free to bisect any angle repeatedly.

Dividing one revolution by three times five times the third power of two only gets us down to three degrees. Obviously dividing by the next three Fermat primes, or further powers of two will never allow us to produce an angle of exactly one or two degrees.

We could produce as small an angular unit as we might need by using only a large enough power of two as a divisor, but this would not allow for exact expressions for angles of 60 or 36 degrees. Since these angles, their multiples, and their repeated halves are required for regular polygons and classic polyhedra, we must include three and five as factors in the divisor. This is a problem with the "CORDIC angle units" (i.e., CAU).

The CAU also fail to divide up the circle as finely as do seconds. We need to divide the circle by a number larger than two to the twentieth power just to equal the second. This should not be insurmountable, as even home computers are moving to a 32-bit standard, with 64-bit systems becoming available for engineers and scientists.

If you accept three and five as factors of the divisor, the question then becomes: should any of the other Fermat primes also be factors, or should we be satisfied to only use angle bisecting for any further refinement?

I am not so sure myself, and this is an important question for a new unit — if it is to ever gain acceptance. (Note that the degree has sufficed for about three thousand years, prior to the presence of the electronic binary computer. A new unit must do all that the old unit did, and have a clear advantage for use, internally, by a computer, as does the hexadecimal vis-a-vis the decimal number system.)

I think that I am borrowing somewhat from the idea that "form should follow function." (I believe that it was D'Arcy Wentworth Thompson who discussed this from a biological standpoint.)

In the case of angular measurement, the unit should allow for exact expressions for classically occurring angles. The degree does this. It should also not unduly emphasize insignificant angles. The degree fails here. Consider an angle of 50 degrees. It does not justify special mention, except that it is a multiple of the degree. Finally, the new unit should work well in the computer environment, since the computer is able to save so much human effort. Here, the degree is acceptable, but not the radian. (The radian, being essential at the theoretical level, is not threatened in the least.)

In general, radical expressions are unavoidable in the values of the trigonometric functions. This implies some inexactness even with the most powerful computers that we can imagine. I believe that this problem is exacerbated by the level of nesting of radicals in expressions (after all denominators are rationalized).

Each application of the half-angle formulas adds one more to the level of nesting. Constructions based on the Fermat primes have a similar problem, but it does not appear to me to be as bad.

For ease of discussion, I will name angles in degrees, and start with 90 degrees, since none of its trigonometric functions involve radicals. The half-angle formulas lead us to 45 degrees with one radical (for all but the tangent). Dividing ninety degrees by three, the first Fermat prime, gives thirty degrees, which also requires one radical (for all but the sine).

At the next step, 22.5 degrees requires one radical nested within another. Using the next Fermat prime, five, gives an angle of 18 degrees. Its functions also require one nested radical.
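Mr. Chastain's nesting count follows directly from the half-angle identity; for the cosine:

```latex
\cos\frac{\theta}{2} = \sqrt{\frac{1+\cos\theta}{2}},
\qquad\text{so}\qquad
\cos 45^\circ = \frac{\sqrt{2}}{2},\quad
\cos 22.5^\circ = \frac{\sqrt{2+\sqrt{2}}}{2},\quad
\cos 11.25^\circ = \frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}.
```

Each halving adds exactly one more level of nesting, which is the pattern the letter counts.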

Next, 11.25 degrees requires nesting three radicals. This is also the case for the angle produced when 90 degrees is divided by seventeen. The functional expressions are much more complicated than those for 11.25 degrees, but what is important for the sake of accuracy, I believe, on a computer, is that the maximum level of nesting is three. Note that this angle is already less than half of 11.25 degrees.

I assert that 5.625 degrees will require nesting of four radicals for its functional values, as will 90/257 degrees (or 0.35019... degrees). The expressions for the former are much easier to obtain, but I believe that the work was done for the latter by 1832, although I have not seen Richelot's article. The point is that the work need only be done once, and then fed into the computer.

Even if we only use the first three Fermat primes as factors, we get down to an angle of about 0.35294... degrees (i.e. 90/255), with only the nesting of six radicals. (I believe that this is conservatively stated, as I prefer to have monomial expressions in the numerator, rather than allow polynomials, which might avoid some nesting. This way, subsequent steps, if required, are easier to obtain.)

We can come very close to the angle just mentioned, through the use of angle-bisecting only, with 256 as the denominator; but this should require the nesting of eight radicals.

It is the nature of these Fermat primes that their continued product is only one less than a power of two. Thus, 3 times 5 times 17 times 257 equals 65,535.

(At some point, infinite series will be considered as an alternative to angle bisecting, or constructible angles. I do not yet have any compelling reasons for using 17, 257, or 65537 as factors, and perhaps powers of two would be easier to work with. Definitely, both three and five must be factors.)

A denominator (for one revolution) in the order of two to the twenty-fourth power should be more than adequate for any applications within the next few decades, if not centuries. This might be obtained as the product of: 3, 5, 17, 257, and 256. An alternative is offered by the factors: 3, 5, 17, and 65536.

Angles are troublesome at best. In general, if we can state the angle exactly, we cannot give an expression (in a finite number of terms) for the functional values (e.g., 40 degrees); and if we can express the functional values exactly, we can't give an exact expression for the angle (e.g., the acute angles of the 3-4-5 right triangle). This is what makes constructible angles so important: they allow an angle, as well as its trigonometric functions, to be stated exactly with a finite number of terms (theoretically, if not practicably).

The question may be whether a new unit is justified. Theoretical work will continue to use the radian, and applications will continue to need only an adequate approximation.

Thank you for your consideration.

Sincerely yours,

Lem Chastain
8210 4th Avenue, 3-J
Brooklyn, New York 11209-4431

I think you lost me more than once there. I, for one, favor the European approach of expressing angles in quadrants. (A quadrant equals 90 degrees or π/2 radians.) It sure simplifies computing most of the trigonometric functions. — pjp

Dear Sir,

After reading "The CORDIC Method for Faster sin and cos Calculations" by Michael Bertrand, I felt that you and Mr. Bertrand might be interested to know that the firm I used to work for had employed the CORDIC method in the circular interpolation algorithm for its line of Computer Numerical Controls (CNCs) since the mid-1970s. The method was chosen for precisely the reasons Mr. Bertrand states: speed and accuracy.

In essence, during circular interpolation, in which the control was required to move the slides of a machine tool (usually a milling machine or a lathe), our controls had to compute a new point on a circle of known radius and origin on every real-time clock interrupt. Depending on the type of control, the real-time clock interrupt occurred once every 8 or 10 milliseconds.

Our controls were closed-loop systems, which meant that during the real-time clock interrupt the current positions of the slides were read from A/D converters, and correction factors for the error in the previously commanded position also had to be computed and factored into the newly computed position. All of this took time, of course, and the system was a foreground/background arrangement, where all other control functions were performed in the background. We had no floating-point hardware, so finding a method for rapidly (and accurately) computing sines and cosines in software was critical to the success of our circular interpolation scheme. The CORDIC technique fit the bill. We used it first on a control with a CPU based on the instruction set of the Data General Nova 10 mini, and then on a control that used the ubiquitous 8086 and the (not so ubiquitous?) 80186. The interpolation computations were done in fixed-point arithmetic, and we held computational accuracy at 32 bits. By the mid-1980s, however, the line of controls had suffered from the malady of "creeping featuritis," and it was necessary to equip more and more of the controls with the 8087 math coprocessor.

Sincerely Yours,

Edward Kotlarczyk
HC 31 Box 5254 B-1
Wasilia, AK 99654

Dear Mr. Plauger

Rodney M. Bates' article ("Debugging with Assertions," The C Users Journal, October 1992) offers two suggestions that may make sense for his applications, but are not universally reasonable.

Bates writes that he "leave[s] assertions active in released programs." In Bates' specialty (compilers) there is little harm done by a failed assertion screeching to a halt. But compilers are unusual in this regard. What about transaction-processing, process control, and real-time data acquisition applications? I, for one, don't want an assertion failure to lose track of my ATM deposit. I especially don't want one to turn off my car or shut down the 727's engines while I'm flying over Chicago.

Bates also writes that "If a function of yours is called by code written by someone else . . . parameter validity checks you write should be error checks rather than assertions." That's reasonable if you're designing code for public consumption, but it adds complexity, inefficiency and defect risks if you're designing it as part of a larger system. When project team members work together on software they should carefully specify the interfaces between modules and use assertions to test compliance with these interface specifications.

This second point brings me to a semantics issue. Bates suggests the terms ensurer and relyer to describe the relationship between the calling and called routines. These terms suggest defensive programming doctrine. Many of us who once followed this doctrine have found it impairs software performance and increases complexity (read: risk of defects). I suggest compliant and reliant are better terms. A compliant routine complies with, but doesn't necessarily ensure, that all data meet the reliant routine's needs. When it doesn't actively ensure data validity, a compliant routine must also be reliant on a compliant calling routine.

My previous criticisms aside, this article was sound. It was a much needed overview of assertions. I hope it inspires more programmers to use this simple, powerful tool.

Sincerely,

William J. Hoyt, Jr.
President
The Softcraft Laboratory
15 Columbus Avenue
Middletown, CT 06457
(203) 346-9219

You can use assertions to advantage even in an embedded application. Record the failed assertion, preferably with a time stamp, in a permanent log; then reboot the system. If you're really sophisticated, you can provide several levels of fallback. Pick the appropriate level of panic for each assertion failure. Heavy-handed as this approach is, I often find it better than letting the code just stumble onward. — pjp

Dear CUJ:

I am responding to a letter from Bill Casey (CUJ, Vol. 11 No. 01, p. 136) regarding the purchase of an optional source-code companion disk for the book The C Toolbox by William James Hunt. Mr. Casey stated that "... the programs (listed in the book) referred to other functions which were not available anywhere in the book...".

Having read through code in various chapters of the book, I must respond to Mr. Casey's observation by asking, "Where? What functions? Please point them out." I've used code fragments from Chapter 5, "Tools for Sorting," and Chapter 6, "BTREE: An Indexed File Module," either modifying blocks of Hunt's code with my own enhancements or just reading Hunt's code to understand his approach to particular problems. Following Hunt's examples has revealed no missing function source that I can see, except for Standard C or compiler-specific library calls, for which you wouldn't expect to see source or declarations unless you cruise your compiler's C library header files or license the compiler's library source.

It should be noted that on page xvii of the Introduction chapter Hunt states, "Sample solutions for many of the enhancements discussed in the book are included on the source disks." Hunt's source disk is excellent (helpful batch and project files, including many enhancement functions complete with discussions). The disk also points out some typographical errors, one in particular being the correction of a parameter in a function declaration presented in the book. It'd be nice if P.J. Plauger would provide a "corrections only" disk for his The Standard C Library book. (C'mon Plauger, break down, man. Drop it into the public domain.)

If it appears I feel somewhat strongly about Mr. Casey's accusations against Hunt's book, it is only because I have found The C Toolbox, Second Edition, to be one of the few books employing the C language that is not a rehash of K&R and that addresses tough problems on a wide range of fronts. One of the few "rides worth the fare."

Phil Pistone
Chicago, IL

Dear Editor:

Just about the most common programming task required for each application I write for Windows is located at the beginning of WinMain, namely code to ensure that at most one instance of the application can exist. All this code has to do is return to the previous instance, if any. Sounds simple, right? Well, there's no Windows function to do it! Not only that, but the existing published examples I've seen fail in a number of simple situations.

Here is some generalized code that you can plug right into your applications to accomplish this common task. (The code is available on the monthly code disk.)

This code has been tested in several applications.

Sincerely yours,

David Spector
President
Springtime Software
81 Amherst Avenue
Waltham, MA 02154
617-894-9455

[In correspondence, please reference DS061 to avoid confusion.]

Sounds like one of those nuisancy little operations that the designers of Windows forgot to make easy. Thanks for sharing the code. — pjp

Dear Mr Plauger,

I work for a small Belgian company as Technical Support Manager and am only an occasional C programmer. That is probably why I am faced with my current problem.

Some time ago, I was asked by one of our clients whether I could provide him with a tool that would warn him when less than, say, 25% of his disk space remained free. As I was not aware of such an existing utility, I decided to write it myself. For that purpose, I used a Borland C++ function called getdfree that returns disk information in a structure of type dfree, declared in dos.h as

struct dfree
{
    unsigned df_avail;   /* number of available clusters */
    unsigned df_total;   /* total number of clusters */
    unsigned df_bsec;    /* bytes per sector */
    unsigned df_sclus;   /* sectors per cluster */
};
All the information I needed was there, and I calculated the percentage of free space with the formula

(unsigned long) dfree.df_avail * 100
               / dfree.df_total
Then I added a test condition and a warning message. And that was it.

All was going well until I gave the program to other customers. Some of them complained that they were getting the warning message although a lot of free space remained on their machines, and so asked me for a more accurate program. I checked their configurations and found that the complaining clients were using very large hard disks (several hundred megabytes). I tested the program at the office on our very large Novell server volumes and got similar problems. More careful debugging showed that the problem comes from the fact that if the disk has more than 64K clusters (the maximum value of a 16-bit unsigned integer), the calculated percentage is wrong.

So my question is: do you know a way to make this program work correctly with any disk? Note that there should be one, as various utilities like the Norton Utilities from Symantec and PCShell from Central Point Software give complete and accurate information for any disk.

Many thanks in advance,

Frederic Naisse
Technical Support Manager
30, Clos des Pinsons
B-1342 Limelette
BELGIUM

Dear Dr. Plauger,

I thought I'd be the last person to write to any programming magazine about an error, but I have found what appears to be an oversight in your String Library article. Being a processor of strings, I typed in the first listing and found that the str_nmid function returns the wrong sub-string. Here, again, is the old zero-offset thing. I include my solution to the problem.

char *str_nmid(char *str, size_t pos, size_t num)
{
    char *mid;
    size_t len = strlen(str);

    pos--;
    if (pos >= len)
    {                          /* outside str */
        *str = '\0';
        return (str);
    }
    if (pos + num > len)
    {
        num = len - pos;
    }
    mid = &str[pos];
    memmove((void *)str, (void *)mid, num);
    str[num] = '\0';
    return (str);
}
My change is the pos-- statement, which makes the function return the correct value. In passing, I find CUJ simply the best for getting the job done. The others are too full of ads and themselves.

Thanks

William H. Logan
Global Weather Dynamics, Inc.
2400 Garden Road
Monterey, CA 93940

The C Users Journal

There is perhaps an error in the code from the article "An Essential String Function Library" by William Smith, January 1993. I believe the routine str_vcat will always duplicate the second argument. Simply deleting all references to Str1 will correct the problem.

char * str_vcat(char * Dest, char * Str1, ...)
char * str_vcat(char * Dest, ...)    /* <=== above line changed to */
{
    va_list VarArgList;
    char   *Str;

    va_start(VarArgList, Dest);
    Str = va_arg(VarArgList, char *);
    strcat(Dest, Str1);              /* <=== delete this line */
    while (Str != NULL)
    {
        strcat(Dest, Str);
        Str = va_arg(VarArgList, char *);
    }
    va_end(VarArgList);
    return (Dest);
}
M. Thomas Groszko
Steelcase Inc. CD-4E-22
6100 East Paris Avenue
Caledonia, Michigan 49316-9139
616-698-4580