Departments


We Have Mail


Dear Sir,

I started my first job in October of last year. My assignments included the design and implementation of C/C++ code for plotting images and graphing signals on a Compaq 386 machine with a VGA monitor. The compiler I had was Turbo C++ v1.00, an older Borland compiler. Since then I have been working hard on both projects, and by now I have a sizable library that achieves part of my goal.

It was only recently that I was introduced to your magazine. I read the articles and thought that some of my work might be of interest to your readers too. I am going to describe some of what I have done so far and, if you find it interesting, I can send the code along with a description of the algorithm to be published.

Now, as I have already stated, one of my tasks was to design a signal plotter. As far as the graphing of signals is concerned, I have had some previous experience with "Matlab" and "Mathematica," two of the leading signal-processing packages. The problem I usually encountered with both packages was that for any given function (e.g., sin(x)*exp(-2*x)), the first step before plotting or doing any manipulation with it was to generate a vector corresponding to the values of that function. If the function was something simple like sin(x), this turned out to be an easy one-step process. However, if it was a little more complicated, like the example above, you had to generate vectors corresponding to each step in the function and then, by subsequent operations, arrive at the resulting vector for the function. This process was very tedious, and if you had to plot a function a few times while changing a scale factor, it could mean hours of work.

Matlab procedure for generating sin(x)*exp(-2*x):

1. generate vector corresponding to sin(x) -> v1

2. generate vector corresponding to -2*x -> v2

3. take pointwise exp of v2 -> v3

4. pointwise multiply v1, v3 -> resultant vector

One way to combat this problem is to define the function within your program. This is a wimpy solution, though, because every time you want to plot a different function or perform some other operation on it (like the FFT) you have to go into the program (or library) and change all instances of that function. A second, more convenient method is to define a routine which, given the mathematical expression in the form of a string and the values of the input parameters, evaluates the function. This is akin to what some of those high-tech calculators do. They, however, implement it in hardware, whereas we want a software solution. Tandy, for instance, produces a pocket-sized scientific calculator that does such calculations for you.

In BASIC you can compute the result of a function on a line by typing ? followed by the mathematical expression. The only problem is that you cannot expand the default function or operator set; that is to say, user-defined functions or operators are not permissible. Also, if you wanted to change the priorities of the operators, it would not be possible, since those are set by default according to standard math convention.

I have been working on this very method of solving our problem. I don't know of any existing software that tackles this problem completely. After a few months of effort designing and implementing the algorithm (in TC++), it has turned out to be a success. A side benefit of this function is that it can be used to build a software calculator. It took only a few lines of extra code to come up with DOS's equivalent of the Macintosh "Calculator." It may not be as efficient, since it is a more general program, but it works fine.

Now that the basic function has been implemented, it can be used in lots of other signal-processing applications. The first that comes to mind is a graph-plotting routine: you input the name of a function and it is plotted for you. The essence is that this function can be used for generating vectors from a mathematical expression, which in turn are used for plotting graphs, taking DFTs, FFTs, convolutions, and implementing other signal-processing algorithms. All you have to do is enter a mathematical expression in string format and give the value, or array of values, of its input variable or variables. The other useful point about my algorithm is that I also tried to optimize it for speed. This will add to the performance of any future software written using this function.

Other than the work I have described above, I have also made advances in image plotting, palette manipulation, quantization, and windowing capabilities. I know that it is a well-saturated field, but if you are interested, I could send you a description of what I have done there.

I am still working on the above problems, continuously expanding my library by adding more functions to it. This is my first job and my first integrated effort towards achieving something in the professional field. My programs may not look very professional, but I have tried my best and am continuously improving their format and style to match the standards. So far, though, I have managed to get them to run and to get the required results out of them. I hope you will not disappoint me and will take the time to look into my efforts. Thanks.

Sincerely yours,

Aimal Siraj
H #334, St # 8,
F-10/2,
Islamabad,
Pakistan

I encourage you to watch our Calls for Papers and ask for our Author's Guide. You might find occasion to submit an article for publication in CUJ. — pjp

Dear PJP:

Before getting down to gory details, thanks for your ongoing articles in CUJ, and for your latest book, The Standard C Library.

Your latest article, in the May 1992 issue, on "Text to Numeric Conversions" was of considerable interest to me because it related to my current activity on numeric conversion connected with an all-ASCII algorithm for compressing numeric data.

I rushed to type in your Listing 4 (xstod.c) because I was anxious to see how it retained precision. In case you are wondering why I "wasted" the time to key the whole works, keying makes me look at every last bit of code (more than once because the result has to be proofread, not only by me, but by that unbiased observer — the compiler). Reading someone else's code puts me to sleep, but keying keeps me awake while the code penetrates my mind.

Before long, I was really awakened by multiple omissions of the second minus sign in the -- operator. It suggested that the listing had been rekeyed for publication or passed through a formatting program which applies some editing. The former aroused the suspicion that there might be other errors or omissions.

But I checked against Fig. 13.19, p. 364, of your book, and everything else seemed OK. Of course, the keying didn't end with _Stod. Next there were xdscale.c and xdunscal.c, which led to xvalues.c and xdnorm.c, and finally your version of math.h. (Fortunately I had your book, because they aren't in the paper.)

Eventually, the whole mess ran under QuickC 2.5. I typed in a few numbers like 123, 123.45 — fine. Then I tried 00.123, which gave .00123 (sic), and .00123, which gave .0000123 (sic). Not so good! Leading zeros get repeated after the decimal point. (No problem with leading zeros if there is no decimal point.)

I began to wonder about omissions from the source (in case _Stod had been rekeyed manually). So, I put together the appropriate files from the distribution disk. Same result!

That made me really look at the code (with the debugger). Meanwhile, I'm wondering: "Is this an April Fools' joke?" No, it's in the book too. "Can PJP be testing how well readers check his stuff?" No, also ... maybe in the magazine, but not in the book.

I'm also wondering what numbers Warren Yelsin (whom you credit with checking _Stod) actually checked. Did he use a different version of _Stod?

There are two problems:

1. _Stod calculates a correction for leading zeros via olead in the first pass and applies it to lexp for numbers containing decimal points. This is not necessary because the second pass handles the zeros properly. In fact, the olead correction screws things up by inserting zeros in unexpected places as you can see in the results cited above.

2. In the second pass, fac[] is initialized with a leading 0 element. This is an error, and throws away a block of data by multiplying it by zero.

The enclosed disk contains two .exe files and the corresponding sources: TSTSTOD, which contains the above errors, and XTSTSTOD, a version in which the above two errors have been corrected. Both are made up of source files from the distribution disk from your book. I am enclosing partial listings which contain the main (test) program and _Stod so you don't have to rush to a printer.

XTSTSTOD still does not produce the correct results for numbers containing more than 7 digits. That apparently is a problem with _Dtento and/or the routines which it calls because the arguments to _Dtento look OK. I haven't the time right now to chase down that problem.

Having said all this, I can't help worrying that somehow I have missed some crucial point which changes everything. Sometimes that is the way it is with computer software.

My guess is that the wrong disk containing an early version was used for both the book and the paper. This is really worrisome because it raises the question of whether the final versions were used for other listings in the book and the source files of the accompanying disk.

Regardless of any problems with the sources, I greatly appreciate your efforts to explain the C Library functions and other matters. My efforts to understand the above problems have been a worthwhile exercise.

Sincerely yours,

Robert S. McDonald
9 Woodside Dr.
Burnt Hills, NY 12027

You have indeed located a bug in the function _Stod in The Standard C Library. The net effect is that leading zeros get inserted after the first nonzero digit. (Neither Warren nor I seem to have exercised this particular test case.)

The fix you propose gives correct answers for many cases, but it has a drawback. Leading zeros displace significant digits. Thus a number like

0.00000 00000 12345 67890 12345

retains only about five or six significant digits, instead of all fifteen. My fix is to correct the logic that counts strings of zeros, rather than discard it:

if (olead < 0)
     olead = nzero, nzero = 0;

If you look more closely at how fac is used, you will find that the initial zero element causes no problems. It is never accessed.

Thanks for your careful attention to the code. You and several other readers have helped find and correct a number of (mostly small) errors. I plan to issue an updated code disk soon that fixes all known bugs. — pjp

To The Editor:

A letter in the June issue described a "portable" UNIX method of searching subdirectories for locating files. The writer then used this example to berate "proprietary and restrictive operating systems," such as MS-DOS. His "My school is better than your school" attitude is typical of UNIX worshipers.

While I do not want to get bogged down in the details of this specific example, I am curious about a couple of points.

In what way is an operating system that can run on any of approximately 100,000,000 machines in the world "proprietary"?

Does UNIX have absolutely no restrictions whatsoever?

Doesn't the immense training necessary to use the UNIX tools count as a restriction?

Does his computer dictionary define the word portable to mean "able to run on any UNIX system?"

I work regularly in UNIX, MS-DOS, OS/2, and other operating systems and environments, and I agree that UNIX is an excellent operating system for many tasks. But there is no fundamental truth about the universe embodied in UNIX or any operating system.

No operating system is the best; each has strengths and weaknesses. They all will be unceremoniously discarded as soon as a better operating system for the task at hand is developed and (more important) the user requirements dictate a change. So lighten up, UNIXheads. It is an operating system, not a religion.

Tim Berens
BOA, Inc.
6691 Centerville Business Pkwy.
Dayton, OH 45459

I have testified in court that UNIX is indeed a proprietary operating system. It just happens not to run on proprietary hardware owned exclusively by the same company. You can say the same for DOS and the IBM PC. As a former UNIX booster, I can only endorse your more ecumenical (or cynical) view of the marketplace. — pjp

To CUJ:

This is in response to A. Bernay, writing from the land of oz.au. Avoiding Americanisms, or at least explaining them, strikes me, I think, as fair dinkum. Perhaps the following definitions will help international understanding.

Pointer: Equivalent to the system's natural internal representation of an address, except on MS-DOS machines, where an address has no natural representation.

Bit field: Expression akin to Kipling's "E's chawin' up the ground," and to the Americanism "to bite the dust"; hence, a place of hardship.

Paradigm: Worth twenty cents U.S. An inflationary development of "putting in your two cents' worth."

Encapsulation: A better way to take aspirin.

I'm sure these are not all the definitions wanted. Interrelated terms like "big endian" and "dining philosophers problem" surely deserve treatment as well.

My point is not to mock the suggestion but to endorse it. So often, the names coined for things bear little definite indication of their referents. If human language were orderly and consistent, we would not need computer languages; we could compile English. One important result of standardized formal grammars like C is better human communication, despite the barriers sometimes raised by a shared natural language. A problem reader Bernay did not address is that programmers of the same nationality often confuse one another.

Chuck Marsh
331 Coleman Dr.
Monroeville, PA 15146

Yup. By the way, have you ever noticed that a dime displays not a single hint that it is worth ten cents? — pjp

Dear Dr. Plauger:

I need your help! I am currently involved in developing a DOS application program, utilizing C/C++, that must support multiple devices. Encoding the functions to support every possible device makes my program big. I don't like the idea of writing .SYS device drivers, since the user has to install them in CONFIG.SYS. However, I have noticed that most professional DOS applications contain multiple .DRV files. What are these files? How do they communicate with the .EXE file? How are they created, compiled, or linked? Can you recommend any book(s) or articles regarding this subject? I will appreciate any advice you can give me. Thank you.

Sincerely

Teofilo P. Torres Jr.
27 Robertson Rd.
West Orange, NJ 07052

As far as I know, there are no standards for writing the kind of loadable drivers you describe. Everybody makes up ad hoc solutions. — pjp

Dear Dr. Plauger,

I wrote to you some time ago lamenting the fact that the ANSI C Standard Library omits the capability of reading back in (via strtod) the "NAN" and "INF" that printf so usefully writes. You note in your book (The Standard C Library) that a compiler writer could implement that through the locale mechanism, but I see no indication of that happening in my lifetime. (A published implementation of such a locale would be a great help in convincing compiler companies — e.g., Symantec THINK C — to include such an IEEE locale.)

Is there any chance that the C++ standard could correct this oversight, acknowledging that essentially all new computers incorporate IEEE math and that scientists and engineers really need and use NANs and INFs, and allow us to read these numbers as well as write them?

Love your mag (CUJ),

Denis Pelli
Syracuse University

PJP replies: The people to talk to are the Numerical C Extensions Group (aka X3J11.1). They are in the final throes of specifying how to extend C to better support IEEE floating point. Contact Rex Jaeschke (rex@aussie.com) for information on NCEG.

Mr. Plauger,

Thanks for your time. I read your articles in The C Users Journal and Computer Language frequently. I am a principal software engineer at Traveling Software, and as such am responsible for keeping our staff abreast of technological developments and so forth. What I would like to know is: how can I get a copy of the ANSI C Standard (in electronic format, preferably)? I have access to CompuServe and the Internet. Please let me know by return mail. If you can, cc my alternate email address, 76447.2147@compuserve.com, since our gateway from TSI to the Internet is rather bad.

Thank you once again,

Ron Aaron

PJP replies: Dave Prosser, editor of the C Standard, is official keeper of the machine readable version. He works at AT&T Bell Labs in Summit NJ.