Letters to the editor may be sent via email to cujed@mfi.com, or via the postal service to Letters to the Editor, C/C++ Users Journal, 1601 W. 23rd St., Ste 200, Lawrence, KS 66046-2700.
Dear CUJ,
In the March 1999 issue, Andrei Alexandrescu asked for better template error messages. Asking for better compilers is a long-term solution; there is a short-term technique for improving the messages from your present compiler: use a class instead of a typedef.
Here is Alexandrescu's code:
typedef map<string, double, less<string> > TDblMap;
typedef vector<TDblMap> TDblMatrix;

void f(const TDblMatrix &M)
{
    M[0] = M[1];
}

This is the message that I get when I try to compile Alexandrescu's code with Borland C++Builder 1:
Project1.cpp(22): Non-const function
std::map<string,double,std::less<string>>::operator=(
const std::map<string,double,std::less<string>>&)
called for const object

Using a class instead of a typedef, you get more verbose code to write:
class CDblMap : public map<string, double, less<string> >
{
public:
    CDblMap() : map<string, double, less<string> >() {}
};

class CDblMatrix : public vector<CDblMap>
{
public:
    CDblMatrix() : vector<CDblMap>() {}
};

void using_class_f(const CDblMatrix &M)
{
    M[0] = M[1];
}

But you get a simpler error message to read:
Project1.cpp(40): Non-const function CDblMap::operator =(const CDblMap &) called for const object
You have to type some extra lines of code to use this technique, and it doesn't solve all the problems listed in Alexandrescu's article. But you can use this idea right now.
Is there any big drawback I am missing?
Pascal Lecourt
Paris
lecourtpas@aol.com

Andrei Alexandrescu replies:
Pascal,
Thanks for your follow-up and insights. Sorry for the late answer; it seems my server has had a problem. In my humble opinion, there are some drawbacks to using inheritance as a solution to long names.
From a pragmatic point of view, there are classes that are not supposed to be derived from. Standard containers and the standard string class are in this category. Deriving from them would be a dangerous and confusing technique that does not scale to team projects. Furthermore, you'll have to reproduce all the constructors of the original class (and there may be quite a few) if you want derivation to behave almost like a typedef. And finally, because you use public inheritance, someone may implicitly upcast to the base class and destroy your derived type via a pointer to the base type, which has undefined results if the base class destructor is not virtual. (This is arguably harmless in your case, but it's still not 100% portable.)
From a conceptual point of view, there is a big difference between typedef and derivation. Typedef introduces an alias for an existing type. Derivation introduces a new type, which brings heavy intellectual baggage with it.
For these reasons, I tend to advise against using this solution. You solve a problem with means that are intended to solve a very different category of problems. For people, typedefing templates is a means to reduce typing and to increase clarity. Deriving from classes is quite a different (and heavier) task.
Again, thanks for your email. I look forward to continuing this discussion.
Best regards,
Andrei
Dear CUJ,
I enjoyed Eric Roe's "A Wrapper Class for Dynamically Linked Plug-Ins" (CUJ, May 1999), but it could stand a little tweaking.
1. The wrapping functions in SimpleMath.cpp are a bit tedious and repetitive. To quote Dan Saks, "In C++, the common cure for tedium is a template." So add to class DynamicLibrary in DynamicLibrary.h:
...
protected:
    template<class T1, class T2>
    int DynamicCall_2_int(const char *pcName, int iCache, T1 arg1, T2 arg2)
    {
        int (*f)(T1, T2) = reinterpret_cast<int (*)(T1, T2)>(
            GetProcAddrCached(pcName, iCache));
        if (f)
            return f(arg1, arg2);
        return -1;
    }

and then the wrappers in SimpleMath.cpp become much simpler. For example:
int SimpleMath::Add(int x, int y)
{
    return DynamicCall_2_int("simple_math_add", SIMPLEMATH_ADD, x, y);
}

2. The defined constants like SIMPLEMATH_ADD in SimpleMath.cpp seem inelegant; an enum would give better type-checking possibilities. More seriously, when there are lots of similar functions to define, like:
DynamicCall_2_int("simple_math_add", SIMPLEMATH_ADD, x, y);
DynamicCall_2_int("simple_math_sub", SIMPLEMATH_SUB, x, y);

it's all too easy to do incomplete editing while copying and pasting, which leads to things like:
DynamicCall_2_int("simple_math_sub", SIMPLEMATH_ADD, x, y);which should be SIMPLEMATH_SUB.
The solution is to harness the macro preprocessor so that the enum constant is derived from simple_math_sub as well. Replace:
#define SIMPLEMATH_ADD 0
#define SIMPLEMATH_SUB 1
...

with
enum eCallCacheIndex
{
    CONST_simple_math_add = 0,
    CONST_simple_math_sub
};

and add a macro to use the function name as the basis for the eCallCacheIndex value:
#define DYNAMIC_CALL_2_INT(FUNCTION_NAME, ARG1, ARG2) \
    DynamicCall_2_int \
    ( \
        #FUNCTION_NAME, \
        CONST_##FUNCTION_NAME, \
        ARG1, ARG2 \
    )

And then you get the even simpler wrapper:
int SimpleMath::Add(int x, int y)
{
    return DYNAMIC_CALL_2_INT(simple_math_add, x, y);
}

3. To compile with VC6 I had to change DynamicLibrary.h from using HANDLE to HMODULE.
I attach the amended files.
Bill Rivers
Skyline Software Ltd
brivers@telinco.co.uk

Thanks. mb
Dear Sir,
I was rather disheartened to read the letter, and then the reply, on June's letters page regarding the March issue being full of Java. What caused me to sigh were the comments regarding Visual Basic. If these were made purely on the factual grounds that Visual Basic is not related to C++, then so be it. On the other hand, if they were made out of some prejudicial childishness (which, reading between the lines, I think they might have been), then shame on you.
Speaking as one who has programmed in assembler, C, and C++ from Windows 1 to the present day (in fact, you couldn't get a greater fan of C/C++ than me), I must say that, yes, it is true, Visual Basic is not C++. Nor, unlike C/C++, is it often misapplied: both of the latter are fundamentally systems languages (read, "inherently dangerous") which are now commonly used to write UI-, I/O-, COM-, disk-, network-, and printer-bound Windows apps! What a waste of time; is the attraction that it is simply more difficult or low-level?
It may interest you to know that our company usually designs and then prototypes in Visual Basic prior to using any other language. We then profile the code to see whether or not it is acceptable in terms of performance, etc. If it is, we leave it alone. If it is not, in true Jon Bentley style, we do some back-of-the-envelope calculations to see how C/C++ might speed up the code. Interestingly, we sometimes find that it would in fact slow it down, especially where there are a great many operations on strings to perform. Most of the time, however, we find that any performance increase is so small as to render it insignificant; "the clock cycles live elsewhere," in other words. If we need real speed, we either work at improving the design or algorithm (normally the place where the best improvements are to be made) or go straight to assembler. (BTW, I note that the ASM Journal has also said "when pigs fly" regarding articles on C and C++!)
No, most of the time, Visual Basic works for us as it does for most of the world's developer community. It delivers (quickly) robust applications that are maintainable, scalable, and cheap to produce. One can now use Visual Basic to write shared and efficient COM components, OCX controls (for Visual C/C++ and Visual Basic developers), NT services, add-ons to Outlook, extensions to Excel, Exchange, Word, PowerPoint, Visio (just to show this list isn't limited to MS apps), Windows CE, etc. Please don't foster or propagate unnecessary "languagist" views (if there is such a term); all languages have something to offer, and surely a mixed-language approach is one of the best ways to go about any OO programming today. This is especially true now that Windows defines the linkage mechanism and COM isolates the machine architecture. C++ is great in places; Visual Basic is too. Neither should be misapplied, and both should be shown the same level of respect.
Long live Visual Basic, I say.
Peter Morris
www.TheMandelbrotSet.com
v-peetm@microsoft.com

My precondition for a VB theme issue in CUJ, namely that pigs fly, was 80% based on VB's lack of similarity to C/C++. But, yeah, you're right: the other 20% was an underhanded snipe at VB. I apologize for promoting a "languagist" (great term) attitude. In fact, I think I share your values. Over the years I have learned that doing it the hard way just to prove how smart you are only proves the opposite.
I note, however, if I have been guilty of languagism, VB must be held guilty of "platformism." Show me another OS but Windows on which VB will run. As for ASM Journal, I find it heartening and kind of amusing that another magazine would be as vehement about C/C++ articles as we are about VB articles. Sounds like their editors are just doing their jobs. mb
Dear pjp,
I know I am late, and I know that proposing updates to the C++ Standard now is silly (the Standard has been approved, and a standard is useful only if it doesn't keep changing), but I would like your opinion on what I think would be a simple and useful new keyword in C++. Consider this program:
class Base
{
public:
    virtual void Foo(int i) { cout << "Base" << i; }
};

class Derived : public Base
{
public:
    virtual void Foo(long i) { cout << "Derived" << i; }
};

The intention of the writer of class Derived was to redefine Base::Foo, but he got the type of the argument wrong (the same would happen if he got the number of arguments wrong). My compiler, Visual C++ 6.0 with maximum warning level, stays absolutely silent, but the following code:
Base *p = new Derived;
p->Foo(1L);

will happily call Base::Foo, which is obviously not the intention of the programmer (note that int and long are both 32-bit integral types on my machine). A better compiler could give a warning, but would still not catch the following problem:
class Base
{
public:
    virtual void Foo() { cout << "Base"; }
};

class Derived : public Base
{
public:
    virtual void foo() { cout << "Derived"; }
};

Here Derived::foo was typed with a lowercase 'f', and the compiler took it as a new virtual function. The bad part is that both Foo and foo will work, but not as the programmer intended. These problems often arise when the writer of a base class modifies the signature of a virtual function, and the users who inherited from such base classes will have to go crazy to find out that their redefined virtual functions silently don't get called anymore.
My suggestion is this: since specifying virtual in a redefined virtual function is useless (except for style issues) as it's virtual anyway, why not introduce a new keyword, redefine, to be used thus:
class Derived : public Base
{
public:
    redefine void foo() { cout << "Derived"; }
};

and let the compiler check that, since the intention of the programmer was to redefine a virtual function in some base class, the signature of the function actually matches a virtual function in some base class. This would not create any compatibility problems, as not using redefine would work like it does now. To adapt code that uses redefine to work with a compiler that does not support it would just require a simple macro:
#define redefine virtual

which is a minor compromise. This new keyword would let the compiler make some easy checks that would enforce the intentions of the programmer, which is the whole point of static type checking, a feature that has helped find millions of bugs early in the development process in languages that support it. The more static checks the compiler can do to enforce the intentions of the programmer, the better, is it not so?
I'd like to know your opinion.
Regards,
Giovanni Bavestrelli
Well, you sucked me in. After your opening sentence, I began framing the usual pedantic warning against invention and fiddling with a standard that's supposed to remain stable. But by the end, I started giving serious thought to using redefine (or the uglier _REDEFINE out of necessity) as a way to document my intention when writing derived classes. I generally oppose warning messages for a host of philosophical and practical reasons, but this problem has bit me too often, in too many forms, to ignore. If I could convince any compiler vendor to add redefinition checking, it would be as an error, not as a warning. Thanks for the interesting thought. pjp
Dear Mr. Plauger,
After reading your article "A Better Red-Black Tree," I was disappointed that you did not call the 2-3-4 tree by its proper name. In the section "A Little Imbalance" you begin a description of red-black trees with, "A red-black tree is a balanced tree that has three kinds of nodes." The description that follows is of a 2-3-4 tree and not of a red-black tree, which you properly describe later. It is true that you can view a red-black tree as a 2-3-4 tree, but the two have different implementations.
If you would like another reference book for binary trees and other good data structures, I have used Introduction to Algorithms by Cormen, Leiserson, and Rivest. It helped me implement a couple of complex data structures, such as B-trees and Fibonacci heaps. I should have a red-black tree implemented somewhere, but I can't find the code right now.
This next point is more about binary trees on modern processors than about your article. Now that most processors cache data, I find that it can be faster to use larger amounts of data per node if doing so reduces the number of memory fetches. Now that caches have become so large, it is actually advantageous to use 256- or 512-byte node sizes rather than the 16-byte node size (assuming 32-bit pointers) that most people use to implement a binary tree. Each processor will have a sweet spot, usually around the size of an L2 cache line.
I was also disappointed that you recently ran an article on skiplists. I never wrote a response to that article, but skiplists are a poor man's binary tree; no matter what others say, a skiplist will both perform worse and use more memory than a binary tree or a B-tree.
I have code on http://resnet.uoregon.edu/~gurney_j/jmpc/ that implements Skiplists, B-trees, and Fibonacci heaps. Each of these includes code that both tests validity and performance of the algorithms.
John-Mark Gurney
Cu Networking

I have a limited space budget for each column, so I often have to pare details. A red-black tree is indeed isomorphic with a 2-3-4 tree, as Nelson's book points out quite clearly. (It was in my list of references.) As for binary trees, skiplists, etc., I tend to be ecumenical. Over the years, I have learned that practically every nontrivial way of structuring data has its benefits in some context. The more structures and algorithms you become familiar with, the more likely you'll know which tool to bring to bear on a given problem. Thanks for the pointer to your code. pjp