Letters to the editor may be sent via email to cujed@mfi.com, or via the postal service to Letters to the Editor, C/C++ Users Journal, 1601 W. 23rd St., Ste 200, Lawrence, KS 66046-2700.
Editor,
Have you gone mad? The January 1999 issue looks more like another Java programming magazine. Last time I checked your magazine was called C/C++ Users Journal.
First let me say that I am not a language bigot. I use, and recommend that others use, whatever language is best for the task at hand. For me that is usually C and C++. I don't have a problem with Java, and I wish it a long and prosperous future.
But, I do have a problem paying for Java content in my monthly dose of what used to be good C and C++ information. I read Windows NT Magazine, and I don't expect to find a ton of Linux information there. Sure, an occasional article on Samba or something related to making them work together is fine, but I don't want to read a lot about Linux there. Likewise, I subscribe to Linux magazines too, and I don't want to see Windows NT information there. I want to get what I paid for.
Your magazine has consistently been excellent over the years. Until now, I never had a problem with it. You managed to balance Windows/DOS/Unix/Generic C and C++ programming pretty well. January 1999 is a whole different matter.
Plenty has been written on Java in the last three years. Too much if you ask me. There are more books than you could fit in the Library of Congress, enough magazines to make your head spin. Sure, I recognize that a lot of C/C++ programmers also know Java, and vice-versa. Even I am starting to use it. This does not negate the fact that I expect to find C and C++ programming covered, almost exclusively, in a magazine with a name like yours.
Now, if you want to rename the magazine C/C++/Java Developers Journal, fine. You did that a couple of years ago when you added C++. Please add Java content though, don't detract from the C/C++ content to make space for Java. Or better yet, just start a new magazine devoted to Java development, or Java/C/C++ mixing.
Whatever, I just wanted to vent my frustration with the total lack of proper content in the January issue. I have always enjoyed your magazine, and I hope to continue to in the future, but the January issue was just filed in the Recycle Bin.
Todd Osborne
Senior C/C++ Developer
FMStrategies, Inc.
Dear CUJ,
I am so sad to see the cover of the January 1999 issue. If I had wanted to subscribe to a Java magazine, I surely would not have sent you my hard earned money, which by the way I earn by writing C++ code.
If you intend to write any more Java columns, please send me my money back.
Steven Woolgar
Our primary focus, when discussing Java, is to show how it mixes with C and C++ programming. But it is solidly in the C family, and it is a topic of considerable interest these days. Thus, the Java theme for the issue. I personally resisted the change of name a few years ago, arguing in part that it was one step down a slippery slope. I too want us to restrict our reporting to the C community, but I hope we don't have to be too slavish about making the name of the magazine match the current mix of dialects that make up that community. That's my two cents worth. I defer to our Editor-in-Chief for a more detailed reply. pjp
Like all trade publications, we live or die by our ability to define and serve a cohesive audience. So ultimately our coverage of Java will depend on how C/C++ programmers use it as a language, and how much they come to view it as their own. Changing our name to C/C++ Users Journal was easy because our readers had already accepted C++ as a natural member of the "C family" Plauger mentioned above. (With all due respect to Plauger, I was one of those who wanted us to change our name. Too many people thought The C Users Journal covered only C!) Today it is clear that C/C++ programmers have not yet accepted Java into the family. So neither flooding CUJ with Java content nor changing our name would be very wise.
As a fellow reader of magazines, I can identify with the desire expressed to get what you pay for. I'm sure, though, that one of the things you wish to pay for is not a case of tunnel vision! Java is out there, and our readers need to consider it in all its glory and infamy. We think that in the next few years most of our readers will be using Java in some form, as well as C and C++. As they do so, they are apt to see more similarities than differences. And with Java's performance being steadily improved, those differences are only shrinking. Readers may well come to ask with justification what we are trying to hide by not publishing any articles on Java.
We don't want to hide anything, either Java's many disappointments as a virtual platform, or its attractiveness as a general-purpose programming language akin to C and C++. It is the latter we find most intriguing, and we are especially encouraged to consider Java from this angle when companies like IBM are developing Java-to-native compilers. An object-oriented language that looks a lot like C++, has fewer hidden landmines, and potentially performs just as well. Should we just ignore it? I don't think so. Hence the bimonthly column on Java.
My final word is, relax, we're not going to go nuts on Java. Where Java goes at this point is still anybody's guess. I just hope you both stick around to see what happens. mb
Dear Sirs,
There is a major bug in Radoslav Getov's erase functions in his pvector class in your January 1999 issue. When an erase function reduces the size of a pvector to half its capacity, it reallocates the memory used by the pvector by using the copy constructor followed by a swap. This results in a new pvector with just enough capacity to hold the objects in it. This presents a huge problem when several inserts and deletes occur at the critical location, as in a stack. If the size of the pvector is initially 100 and its capacity is 200, an erase will cause a reallocation, with the size and capacity both reduced to 99. If the next operation on the pvector is an insertion, the pvector will have to be reallocated again, with its capacity increased to approximately 200 in most STL implementations. Thus, an unfortunate combination of insertions and deletions could easily result in quadratic behavior.
The solution to this, of course, is to always reallocate in such a way that the capacity of the pvector is always greater than the size. For instance, you might reallocate when the size of the pvector is one fourth of the capacity but only reduce the capacity by one half.
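Joe's policy can be sketched as follows. This is a minimal illustration written against std::vector rather than Getov's pvector, and the helper name and the exact 1/4 and 1/2 thresholds are illustrative, not from the article. The point is the gap between the shrink trigger (size below a quarter of capacity) and the post-shrink load factor (half of capacity): at least capacity()/4 further insertions or erasures must occur before the next reallocation in either direction, so no single sequence of operations at the boundary can reallocate on every step.

```cpp
#include <vector>

// Hypothetical helper: shrink only when size() drops below 1/4 of
// capacity(), and then leave capacity() at about 2 * size(). The gap
// between the trigger (1/4) and the post-shrink load factor (1/2)
// prevents an insert/erase sequence at the boundary from forcing a
// reallocation on every operation.
template<class T>
void shrink_with_hysteresis(std::vector<T>& v)
{
    if (4 * v.size() < v.capacity()) {
        std::vector<T> copy;
        copy.reserve(2 * v.size());          // keep headroom for growth
        copy.assign(v.begin(), v.end());
        v.swap(copy);                        // old storage freed here
    }
}
```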
Joe Gottman
Impact Technologies
JGottman@impact-tech.com

Radoslav Getov replies:
Dear Joe,
You are right. I didn't think about this aspect of reallocating. However, as you might guess, it's somewhat a matter of "bad luck" (i.e., not very probable) to end up in such a situation. It will happen only rarely (on the order of log(n)/n of the time). In most cases no reallocation will happen upon erasing.
But, regardless of this, it can happen. And if it can, it will :-)
Following your proposal, the pvector's erase function, which was:
    iterator erase (iterator it)
    {
        _BaseType::erase (it._getBaseIt());
        if (2 * size() < capacity())
        {   // deallocate some storage
            pvector copy (*this);
            swap (copy);
        }
        return it;
    }

might look like this:
    iterator erase (iterator it)
    {
        _BaseType::erase (it._getBaseIt());
        // shrink only when size falls below 1/4 of capacity...
        if (4 * size() < capacity())
        {   // ...and then only down to 1/2 of capacity
            pvector vcopy;
            vcopy.reserve (2 * size());
            std::copy (begin(), end(), std::back_inserter (vcopy));
            swap (vcopy);
        }
        return it;
    }

The same should probably be done with the other erase function (the one with two arguments) as well.
Thanks for the hint.
Sincerely yours,
Radoslav Getov
Dear CUJ,
Re: cryptography: "Stacking up a pile of strengthening techniques on the basic algorithms will never catapult it into the big leagues of Blowfish or IDEA, but may give developers the mistaken impression that they have the Maginot line in their back pockets. I would prefer to see the technique in the article as a workman's tool, not as (say) an advanced military defense." (Warren Ward, Letter, page 98, C/C++ Users Journal, December, 1998). To those of us who survived World War II, of course, the Maginot Line is forever remembered as the epitome of static vulnerability and false security.
First, the Maginot Line was not quite long enough, and the Germans simply crossed the borders at the unprotected north and south ends. Second, the built-in heavy artillery pointed towards Germany and was unable to swivel towards France, which was precisely where the invaders were gathering. Finally, the Maginot Line (started in 1926) represented a static defense strategy (a sort of concrete version of the 1914-18 trenches) that failed to predict Hitler's blitzkrieg.
PAX etc.,
Stan Kelly-Bootle
And you didn't say a word about the horribly mixed metaphor. pjp
Dear CUJ,
re: Mike Pickhardt's letter in the December 1998 issue of CUJ, concerning the time_t problem. Mike mentions that the Standard C and C++ <time.h> library functions use a 32-bit signed long to represent time variables, and thus suffer from a fairly short useful lifetime. I'd like to clear up a few common misconceptions about this.
The "time variable" he refers to is the time_t type defined in the <time.h> header. He might be surprised to learn that the ISO C Standard makes very few requirements for this type. For example, it does not mandate what type it must be, other than it must be an arithmetic type. So it could, for example, be implemented as a float.
The C Standard also does not mandate to what precision the time_t type is capable of representing time. One implementation could encode time to the nearest second, another could encode it to the nearest microsecond, and yet another could encode it to the nearest day. (The mktime and localtime functions seem to imply that times are stored to the nearest second, but that is never explicitly stated in the standard.)
The C Standard also does not mandate a range of values for the time type. An implementation could do its best to encode times across a reasonably large span of years, or it could handle times spanning only a single day. A 32-bit signed integer encoding individual seconds is capable of handling times over a span of 68 years, while a 32-bit signed integer that encodes the number of days from a given date spans dates of almost 6 million years.
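Those spans check out with a little arithmetic. The sketch below assumes a mean Gregorian year of 365.2425 days and the 2^31 - 1 positive range of a 32-bit signed integer; the function names are mine.

```cpp
// Span of a 32-bit signed counter that encodes individual seconds,
// assuming a mean Gregorian year of 365.2425 days.
inline double seconds_encoding_span_years()
{
    const double secs_per_year = 365.2425 * 24.0 * 60.0 * 60.0;
    return 2147483647.0 / secs_per_year;   // about 68 years
}

// Span of the same counter encoding whole days instead.
inline double day_encoding_span_years()
{
    return 2147483647.0 / 365.2425;        // almost 6 million years
}
```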
The specific encoding that Mike refers to is the most common implementation for time_t, the POSIX specification. In actual fact, the POSIX specification does not state a size for the type, only that it must be a signed integer. Most POSIX implementations have 32-bit ints and typically make time_t an int; they are free, though, to use a longer type, such as a 64-bit long, if they want.
Mike mentions that 32-bit signed integer time_t values will "run out of bits" in Jan 2038, and he's right. But that only makes it a problem for POSIX (and POSIX-like) systems that use that one particular data type for time_t. Systems using 64-bit integers, or systems using a floating-point type, or even some other extended type, will have different "end of epoch" dates. Some systems won't run out of bits for thousands of years.
Microsoft Windows supports a 32-bit signed integer time_t type for ISO compatibility, but it uses a 64-bit integer type for its internal system time and file-system timestamps. This larger time type encodes time to the nearest 0.1 microsecond "tick" and spans a range of over 29,000 years. (This time encoding is almost identical to, and was apparently derived from, the encoding used by the Digital VMS operating system.)
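The same arithmetic confirms the 64-bit figure. Again a sketch: the 0.1-microsecond tick and the signed 64-bit range are as described above, and the function name is mine.

```cpp
// Span of a signed 64-bit count of 100 ns ticks: 2^63 - 1 ticks at
// 10^7 ticks per second, with a mean Gregorian year of 365.2425 days.
inline double tick_encoding_span_years()
{
    const double ticks_per_sec = 1.0e7;    // one tick = 0.1 microsecond
    const double secs_per_year = 365.2425 * 24.0 * 60.0 * 60.0;
    return 9223372036854775807.0 / ticks_per_sec / secs_per_year;
}
```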
The fact that the ISO C Standard does not guarantee any kind of minimum precision or range has been noticed by several people. A few public observers, including me, have issued proposals and public comments to the ISO C9X committee in the hope that the next version of the C Standard will provide better semantics for the time types and functions. I personally would like to see a guaranteed minimum range and precision for time values and standard methods for determining what those values are. I would also like to see the tm structure and the strftime function enhanced to allow for subsecond precision of times, at least down to milliseconds. We will see if any of these ideas pan out.
As far as the time problem for POSIX mentioned by Mike, I expect that the POSIX time_t type will be extended within the next 30 years to be an integer type of at least 64 bits. This will solve the bit width problem, but it will create a new problem of backwards compatibility with old data. On the other hand, someone might come up with an entirely new solution that completely replaces the current time_t datatype in favor of a better one. It's anyone's guess at this time.
David R. Tribble
dtribble@technologist.com

Thanks for teasing apart the different aspects of the problem(s) with representing times. I too am willing to believe that the 2038 problem will succumb to multiple software updates between now and then. But then, I'm not selling my time as a consultant on Y2K problems. pjp
Dear CUJ,
First of all, let me tell you how much I enjoyed and benefited from Chuck Allison's article "What's New in Standard C++" in your December 1998 issue. Coming from a C background, I've never really gotten around to using the Standard C++ Library. I've only gone as far as defining classes (and everything that comes with them); most of the rest of my code is actually "C." Mr. Allison's article was just what I needed.
I have a couple of questions, though, which I would like to ask him:
1. In the "Partial Specialization" section, the following code snippet appears:
    template<class U>
    class A<int, T> { ... }

Shouldn't the second line read:
    class A<int, U>

instead?
2. Also in the same section (toward the end), he gave sample code to avoid "code bloat." He began by fully specializing on void * and then partially specializing on T * and making it derive privately from the full void * specialization. My question is, are those steps really required to make all pointer specializations use the T * specialization? Is it a fixed formula or is it just one of several ways of doing it? Can't we just partially specialize on T * and achieve the same thing?
Thanks. Hope to receive your reply.
Ever A. Olano
Chuck Allison replies:
My answer to no. 1 is, yes indeed. Someone else caught this and told me. It will be corrected in the version I'll post to my web site in March.
In answer to question no. 2, the steps I listed are necessary. The purpose of this idiom is to share a common implementation (to avoid code bloat, as mentioned), so we have to establish that common implementation first. Then any pointer instantiation (except void *, of course) will use the T * partial specialization, which in turn just forwards all the work to the void * implementation. You will find a more thorough explanation of this canonical example in Bjarne Stroustrup's 3rd Edition of The C++ Programming Language under "Partial Specialization." Thanks for writing.
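For readers without Stroustrup at hand, the idiom Chuck describes can be sketched as below. The class and member names here are illustrative, but the structure follows the canonical example: one full specialization on void * holds the real implementation, and a partial specialization on T * is a thin, type-safe wrapper that forwards to it, so only one copy of the code is instantiated no matter how many pointer types are used.

```cpp
#include <cstddef>
#include <vector>

// Primary template (general case).
template<class T> class Vector {
    std::vector<T> v;
public:
    void push_back(const T& t) { v.push_back(t); }
    T& operator[](std::size_t i) { return v[i]; }
    std::size_t size() const { return v.size(); }
};

// Full specialization: the single shared implementation for pointers.
template<> class Vector<void*> {
    std::vector<void*> v;
public:
    void push_back(void* p) { v.push_back(p); }
    void*& operator[](std::size_t i) { return v[i]; }
    std::size_t size() const { return v.size(); }
};

// Partial specialization for every other pointer type: a type-safe
// interface that forwards all the work to the void* implementation.
template<class T> class Vector<T*> : private Vector<void*> {
    typedef Vector<void*> Base;
public:
    void push_back(T* p) { Base::push_back(p); }
    T*& operator[](std::size_t i)
        { return reinterpret_cast<T*&>(Base::operator[](i)); }
    using Base::size;
};
```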
Chuck Allison
(The Harmonious CodeSmith)
Consulting Editor, C/C++ Users Journal
cda@freshsources.com