Departments


We Have Mail

Letters to the editor may be sent via email to cujed@mfi.com, or via the postal service to Letters to the Editor, C/C++ Users Journal, 1601 W. 23rd St., Ste 200, Lawrence, KS 66046-2700.


Dear P.J.,

I found that Borland C++ 3.1 interprets ios::precision as the number of decimal fraction digits, like f format in printf(), while Borland 4.x and Microsoft Visual C++ 4.0, Sun, Solaris, HP, and DEC interpret it to be the number of significant figures. I couldn't find an interpretation for it in your Draft Standard C++ Library. Does the Standard specify? I don't have a copy except on an unopened CD-ROM.

Is there a story here? I'll guess that Borland C++ 3.1 got this one wrong.

Glen Deen

The logic is twisty here, which is why we develop standards. First, precision in iostreams is intended to have the same meaning as precision in printf floating-point conversions. Second, the equivalent printf conversion specifier is f for the flag fixed, e for the flag scientific, and g otherwise. The last case is the default for iostreams, which should thus treat any precision as a count of significant digits. I'd say that Borland 3.1 is odd man out. — pjp


Dear Editor,

There is an (admittedly minor) problem with the code supplied for the article "A C++ CGI Framework" (January 1997). The C++ file parser.c does not comply with the current draft of the ANSI/ISO standard.

Here is the snippet of incorrect code:

for (int i = 0; i < encoded.size(); i++) {
   if (encoded[i] == '%') {
      // convert escaped character
      decoded +=
         unescaped(encoded[i+1],
                   encoded[i+2]);
      i += 2;
   } else {
      // copy the regular character
      decoded += encoded[i];
   }
}

// change the '+' characters to spaces
for (i = 0; i < decoded.size(); i++) {
   if (decoded[i] == '+')
      decoded[i] = ' ';
}

The current ANSI/ISO draft says that the variable i defined in the first for loop is no longer in scope in the next for loop.

Many compilers do not support this portion of the Standard, but when given a choice, I think it makes more sense to publish compliant code and tell people to know their compilers than the other way around.
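For readers who want a version that compiles under the stricter scoping rule, here is one sketch; the unescaped() helper is a reconstruction for illustration only, not the article's original code:

```cpp
#include <string>

// Hypothetical reconstruction of the article's unescape helper.
static int hexval(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return 0;
}

static char unescaped(char hi, char lo)
{
    return (char)(hexval(hi) * 16 + hexval(lo));
}

std::string decode(const std::string& encoded)
{
    std::string decoded;
    // Each loop declares its own index, so nothing depends on the old
    // rule extending a for-init declaration past the loop body.
    for (std::string::size_type i = 0; i < encoded.size(); i++) {
        if (encoded[i] == '%') {
            // convert escaped character
            decoded += unescaped(encoded[i+1], encoded[i+2]);
            i += 2;
        } else {
            // copy the regular character
            decoded += encoded[i];
        }
    }
    // change the '+' characters to spaces
    for (std::string::size_type i = 0; i < decoded.size(); i++) {
        if (decoded[i] == '+')
            decoded[i] = ' ';
    }
    return decoded;
}
```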

— Doug Young
dyoung@geoworks.com


Sirs,

I am a relatively new subscriber to your magazine (two years), and of the stacks of publications I receive, it is always the first one I read, usually cover-to-cover. In my job as manager of large-scale Internet projects for a consulting company (Insource Technology, Houston, Texas), I often find myself evaluating, and often implementing, different mechanisms for user interaction within web-based forms. For this reason, I found Richard Lam's article, "A C++ CGI Framework," timely and useful.

I do, however, take exception to one minor point which Dr. Lam makes towards the beginning of the article. Dr. Lam states that in passing data from a form to a script, the GET method is the "older and less flexible way" (as opposed to POST), and that "today, the POST method is recommended." I disagree with this assertion, and I am troubled that legions of future developers will accept this as gospel, forever discarding what I feel is often the preferable way to "throw data around" from forms to scripts.

I feel it is often preferable for two reasons. The first reason is one of robust design and professional-looking interfaces. When I design web pages for clients, they are usually concerned about how a page "looks" to the end user. This includes the URL. Utilizing the POST method is ugly, because it throws everything you are trying to send to a CGI application across the URL path. In addition to lacking any aesthetic qualities whatsoever, it encourages people to "break the code" and try to pass elements on their own. In the hands of an inexperienced programmer, who may not have considered all possible types of input, this can cause CGI scripts to fail, sometimes with disastrous results from a security standpoint.

In addition to this, the GET method allows the developer to easily pass both user-defined attributes (from the screen elements on the form) as well as whatever state or other variables the developer may wish to pass along without the user's implicit understanding. This is indispensable for forms which may be one or two layers deep, but which are not quite complex enough for a "shopping cart" implementation. I also find this useful for passing cookie elements down to a lower level form, without having to incur the cost of re-accessing them. I know the POST method affords avenues that can also accomplish this, but the aesthetics suffer severely, and this is generally unacceptable to my clients, who want everything "neat and clean."

I think both of these methods (POST and GET) have their individual merits, and to pass one of them off as somehow inferior may not send the message that Dr. Lam was intending.

I enjoy your magazine, and recently renewed my subscription. Keep up the good work!

Andrew Lapsley
Manager
Internet Solutions Group Insource Technology
363 N Sam Houston PKWY E STE 1800
Houston, TX 77060
(281) 955 4355

Richard Lam replies:

Andrew Lapsley makes a good point, and I should have referenced my statement regarding GET and POST actions in the article. For example, see http://hoohoo.ncsa.uiuc.edu/cgi/. The HTML 2.0 specification (http://www.w3.org/pub/WWW/MarkUp/html-spec/html-spec_8.html#SEC8.2.2) is a bit more explicit, recommending GET when the CGI program has no side effects (such as database searching), and POST when the URL submission causes some change, such as a database update or subscription to a service.

I apologize if the statement in the article was misleading.
— Dick Lam


Dear Dr. Plauger,

It's nice to be able to overload existing operators in C++. Now, how about being able to create new operators? For example, in implementing a vector algebra class, one would want to have inner (scalar) products, vector (cross) products, and outer products. One would like to create new and distinctive symbols for these operators.

In the general case, one would have to specify operator precedence, so that a new keyword would be required. Furthermore, an interface to editors with graphics capability would be needed so that one could design the operator symbols.

I'm a little surprised that this capability does not already exist in a programming language outside of the proprietary math packages.

On another topic: I am an old-time IBM mainframe programmer (MVS, BAL, Fortran) who made the switch to microcomputers ten years ago. It has been amusing to see the microcomputer community reinventing such things as multitasking, real-time, network management, and virtual memory which the mainframe world has known for years. But where have channels gotten to? In the mainframe world we were always talking about channels: selector channels, byte channels, block multiplexer channels and so forth. (For the newbies, a channel is a path between memory and a device controller with its own independent mini-processor to handle data transfer.)

Initially there was no need to discuss channels for microcomputers, since microcomputers were essentially limited to the typing speed of the human operating them, anyway. But some years of watching the lights on my A and B drives alternating when they could be overlapping leads me to wonder anew. Now that microcomputers are bidding to replace mainframes, where are the channels?

Hoping you can clear this up, I am

Sincerely yours,
Peter P. Chase
pchase@SUL-ROSS-1.SULROSS.EDU

C++ is already approaching terminal ambiguity because of recent additions. Templates vie with operator and function overloading for the right to use limited argument type information on function calls. Both of these work at cross purposes with argument promotion rules, inherited both from C and from user-defined classes. A language with an extensible set of operators is a research project, not a(nother) "simple" extension to C++.

Even the earliest model of the IBM PC had several DMA channels, the microcomputer equivalent of mainframe I/O channels. A typical BIOS device driver starts the DMA channel going, then hangs the CPU waiting for the operation to complete. But we are seeing more and more genuine parallelism in PCs. The laptop I'm typing on right now can play an audio CD and repaint the LCD display with alacrity even as it compiles in one window and edits in another. Admittedly, task switching isn't as smooth as Unix managed a quarter century ago, but it's getting better. — pjp


Dear CUJ Editor,

I thought your response to Dan Oestreicher and Jack Wathey (CUJ, September 1996) was too short. Indeed, to program a "secret" (and I like that term!) implementation in C++, no extensions to the current specification are necessary. Dan Oestreicher hit the nail right on the head ... you make an implementation class, and then wrap that with the distribution front-end class (header file and obj/lib file).

Granted, any hacker worth his salt can figure out what's happening under the covers and (probably foolishly) hijack the class. But for class library creation and forcing programmers to "follow the encapsulation rules, dammit! Hands off!," Mr. Oestreicher's method is the solution.

Mr. Wathey's suggestion is syntactic sugar, and less flexible. (From FAQ-64 in the C++ FAQs by Cline & Lomow: When a module dreams of growing up, it wants to be a class. Similarly with these "secret" implementation functions.)

Sincerely, long-time fan,

John "Eljay" Love-Jensen
C++ programmer at large
jlove-jensen@carlson.com


Dear Mr. Plauger,

Having just debugged some code dealing with ASCII characters and the BCD format, I'd like to know why the inventors of the ASCII standard chose to make the hexadecimal character set non-contiguous? Did they not have the foresight to consider that one might want to exploit the numerical properties of ASCII [0-9] & [A-F]?

On the same subject, why were the sets [A-Z] and [a-z] broken up?

Just wondering,
Vince Pachiano
RBXdev@dayton.bassinc.com

The short answer is that ASCII was developed by communications types, not by programmers. And if you think ASCII has annoying properties, be grateful you don't have to deal with EBCDIC any more. In EBCDIC, the sets [A-Z] and [a-z] each have islands of non-letters within their respective numeric ranges, though not within [a-f] or [A-F]. — pjp


Dear Mr. Plauger,

I've been wondering for some time why the STL functions don't declare the iterator, predicate, and function arguments and, in certain cases, return values as const (or non-const) references as in:

template<class FwdIt> inline
 FwdIt max_element(const FwdIt& first,
  const FwdIt& last);

template<class InIt, class Fun> inline
 Fun& for_each(const InIt& first,
  const InIt& last, Fun& op);

template<class InIt, class Fun> inline
 const Fun& for_each(const InIt& first,
  const InIt& last, const Fun& op);

Was there an actual decision made not to do this, and what was the justification?

Was it perhaps decided that since the iterator arguments were almost always incremented or decremented anyway, auto copies of the arguments would have had to be always defined, so the copy might just as well get done via passing by value? But then, this doesn't seem to apply to function objects. For the latter, was it perhaps thought that using non-const references would make passing regular function pointers unsafe and that using const references would cause headaches when passing function objects with non-const operator() member functions? Then, why not provide both const and non-const versions?

My personal orthodoxy is to always use const& over passing by value for non-builtin types and I mentor fellow team-members accordingly. I would like to be able to supply a good explanation for why the proposed standard library sets a precedent for not making use of this practice.

Thanks for any light you can shed on this.

Jerry Liebelson
jl@ingress.com

Alex Stepanov, Meng Lee, and David R. Musser, the developers of STL, did very few things by accident. My understanding is that they did intentionally choose value semantics for iterators and function objects (including predicates). Certainly, an important goal was to permit object pointers to serve as iterators, and function pointers to serve as function objects. And an important design assumption was that even the most elaborate iterators and function objects should remain reasonably lightweight. STL follows a number of design rules that are at odds with more conventional C++ class design, but it's pretty consistent internally. — pjp


Dear Dr. Plauger,

We need a date2000 compliant routine that we can embed in a C-based program where we can pass it a date in one of several formats and request that it return the date in a specific format.

If you can point me to a routine like this from any source, freeware, shareware, or expensiveware, I would sincerely appreciate it.

Thanks,
Mike Kitchens
mckitch@atl.mindspring.com

Most versions of strftime that I run across in Standard C library implementations get year 2000 issues right. Check your local compiler before looking too far afield. — pjp


Dear Mr. Plauger,

I have been programming in C/C++ for several years now, and am a CUJ subscriber. I recently upgraded my Microsoft Visual C++ compiler to version 4.2 and installed all the cool new stuff including the ANSI standard headers and STL. Not wanting to be the last one on the block to start using the ANSI standards, I decided to try some of the new ones. I didn't get too far.

Since the applicable ANSI standard headers provided by Microsoft have your name in them, maybe you can explain what's going on, or is supposed to be going on in the following simple program:

#define _USEANSI_

#ifdef _USEANSI_
 #include <iostream>
#else
 #include <iostream.h>
#endif

int main()
{
 char buf[80];
 cout << "Enter a line of text :";
 cin.getline( buf, sizeof(buf) );
 cout << buf << endl;
 return 0;
}

Here's the problem: When _USEANSI_ is defined, I have to hit the Enter key twice before the second cout insertion is executed. When the define for _USEANSI_ is commented out and <iostream.h> is included instead of <iostream>, the program works as I would have expected it to.

Thanks,
George T. Cottrell
george_c@ix.netcom.com

Looks like you've tripped across a bug in basic_istream::getline, one not previously reported. The code insists on peeking at the character following the newline when it doesn't have to. As a result, it can't proceed until you've typed further than you'd like. The fix is a one-line change to <istream>, where getline detects the delimiter:

else if (_C == _Di)
 {++_Chcount;
 rdbuf()->sbumpc(); // was snextc()
 break; }

Thanks for reporting the problem. — pjp