Dear DDJ,
In his article "Comparing C/C++ Compilers" (DDJ, September 1995), Tim Parker states: "Performance-wise, the GNU C compiler is unremarkable. It was about average in speed tests during our first trials." Fair enough, I suppose. But was he referring to compilation speed, or speed of the executed code?
He continues: "It has no optimizing capability, so we spent a couple of days playing with flags and options. Eventually, we managed to find several useful tweaks that improved performance a little, but the compiler still didn't win any prizes."
Bzzzt. The only thing I can imagine is that Mr. Parker completely neglected to read either the man page or the (very thorough) TeXinfo documentation for gcc. Here's an excerpt from the gcc man page:
Optimization Options
-fcaller-saves -fcse-follow-jumps -fcse-skip-blocks -fdelayed-branch -felide-constructors -fexpensive-optimizations -ffast-math -ffloat-store -fforce-addr -fforce-mem -finline-functions -fkeep-inline-functions -fmemoize-lookups -fno-default-inline -fno-defer-pop -fno-function-cse -fno-inline -fno-peephole -fomit-frame-pointer -frerun-cse-after-loop -fschedule-insns -fschedule-insns2 -fstrength-reduce -fthread-jumps -funroll-all-loops -funroll-loops -O -O2
And those are just the vanilla optimizations. There are also a few language extensions that can improve efficiency. For example, first-class labels, which can allow direct threading in interpreters (similar to continuation-passing style) or computed jumps. Also, specification of return-value destination, which is vaguely like the placement option in C++'s new operator.
I can't comment much on the quality of generated code, but I do know that, for speed of compilation, GCC 2.5.8 under Linux is noticeably faster than Watcom C++ 10.0 under DOS (the same 486/66 for both). And the Watcom compiler is fast.
Todd Eigenschink
eigenstr@CS.Rose-Hulman.Edu
Tim responds: Thanks for your note, Todd. Indeed, I did read the GCC man pages and all other accompanying documentation you mention. My point in the article was that, although you can set optimization through careful use of options and flags, there is no "optimize" mode as with almost every other compiler. I didn't offer my opinion on whether this is good or bad--just that it was missing. As for speed, I double-checked my benchmarks several times, and the results are applicable to both compilation and execution.
Dear DDJ,
In his September 1995 "C Programming" column, Al Stevens wrote that "MFC is not only the de facto standard Windows framework, it's the best one available." Ugh!
I've been using Borland C++ and OWL for a few years, so I'll make comparisons directly between MFC and OWL. I will not say that OWL is the best Windows framework. I think it's very good, but I suspect that some of the commercial frameworks (zApp, Zinc, and the like) are even better.
On the other hand, there are a lot of things to like about Visual C++ 2.x. I suspect that when combined with a good third-party class framework, it would be hard to beat as a development platform.
Jim King
jim.king@mail.sstar.com
Dear DDJ,
I take exception to Al Stevens' opinion of OWL versus MFC ("C Programming," September 1995). Not only do I feel that OWL is superior to MFC in every way, shape, and form, but I also don't think that de facto standards are determined by fiat or by the massive amounts of propaganda Microsoft has created to promote its own Windows application framework. I have found that there is hardly a single area of Windows programming that is not easier to implement using OWL than MFC, and I do not want to use an interface that makes my job harder and programming less fun.

Why don't we let programmers decide what they like rather than attempt to coerce them into using a supposedly more popular platform? Just because it is Microsoft does not make it the best, and in this case, OWL does a magnificent job of creating an application framework while MFC is only a slightly better than mediocre implementation. If you like being as close as possible to the Windows SDK, you might want to stick with MFC, but if you want to do real object-oriented programming with tremendous flexibility and extensibility, OWL is the clear choice.
Edward Diener
70304.2632@compuserve.com
Al responds: I agree with Edward when he asserts that the de facto standards should not and cannot be created by decree. They occur naturally as the result of the wide acceptance and usage of a convention by the practitioners in an industry. MFC passes that test. Most compilers have licensed MFC, most programmers prefer it, and most Windows-programming employment opportunities require it.
Edward and I have different opinions about what we prefer in a framework class library. Both libraries have their technical strengths and weaknesses. I prefer MFC's close-to-the-bones approach over OWL's would-be object-oriented shroud. A translucent veil rather than a blackout curtain. My opinion is, I think, more mainstream than his. I did not, however, mean to suggest that he and other OWL users should be forced to change. But I am flattered that he thinks that my influence could coerce programmers to do anything they don't want to do.
Dear DDJ,
Jonathan Erickson's "Editorial" about PNG (DDJ, September 1995) claims that "PNG is free and open, and available for use without fear of patent infringement." How is it possible to write a new and useful program of significance without fear of patent infringement? That is, how can you know "there are no patents associated with PNG"? Has someone reviewed all existing patents to ensure that PNG does not infringe? Even if this complete review of all patents could be done, isn't this process subject to interpretation? What about patent applications that are being reviewed at PTO and may be issued in the future?
While it may be true that the inventors of PNG have not filed any patents, isn't it possible that PNG could still infringe on one or more existing or future patents? Isn't it possible that several years from now, PNG could be in a state similar to that of GIF today?
Christopher Glaeser
cdg@nullstone.com
Dear DDJ,
I have concerns over Jonathan Erickson's "Editorial" on the Satan program (DDJ, June 1995).
I installed and used Satan at my last job (at a university) and think that Satan has gotten a bad rap for no real reason at all. Satan has come under fire by the press, users, and system administrators alike. I just do not see what the big problem is.
Contrary to (popular?) belief, Satan does not use any kind of magic to test a system. In fact, to my knowledge, all of the so-called holes that it scans for are documented, and can be found by anyone looking in the right places. Holes that are discovered are posted to the Internet on a regular basis by CERT (if memory serves correctly).
Anyone can find out this information by doing some footwork. I find that most of the people who are having problems with Satan are the system administrators who are not doing their jobs (IMHO) and are bitching about it.
Use of Satan is not a concern if you do not have the holes that it looks for (all the more reason to have Satan test your system before someone else does). If people take the time to research the program, and the holes that it looks for, they will find that it is not as bad as it seems. I believe that this is one case where word-of-mouth just got outta hand.
James R. Twine
SJMR66B@prodigy.com
Dear DDJ,
Having written extensive thunking libraries for the x86 under OS/2 (refer to CompuServe's OS2DF1 library 9), I wish to warn other developers about some pitfalls which exist in the segmented x86 architecture when going from 32-bit to 16-bit code. Because many of the 32-bit API calls in Windows 95 use thunks back to underlying 16-bit code, these x86 architectural problems may cause even flawlessly written application code which calls the 32-bit APIs to encounter problems.
Thunking, for those not familiar with it, is the process of changing an x86 processor between the flat model of 32-bit protected mode and the 16:16 model of 16-bit protected mode. In the 32-bit flat model, stack, code, and data are addressed as simple linear 32-bit offsets from a starting position. In the 16-bit protected mode, all of these entities are addressed as a 16-bit segment selector and a 16-bit offset from the start of the selector. In 16-bit mode, the x86 segment can be no more than 64K in size. When thunking, you convert between these two addressing schemes. Under both OS/2 and Windows 95, the 16-bit selectors are tiled; that is, where one 16-bit selector ends, the next one starts. In other words, the 16-bit selectors are not overlapped. Not only does this maximize the memory space available to the 16-bit mode, it prevents a nasty block-move overlap bug from occurring. This scheme allows 16-bit code access to any individual byte of data available to the 32-bit code.
Notice the careful wording of the last sentence: I talked about "individual" bytes, saying nothing about "arrays" of such bytes. If an array in 32-bit mode crosses one of the 64K boundaries between the tiled 16-bit selectors, it is not accessible to underlying 16-bit code as a contiguous array. The physical address of an array in 32-bit memory depends not only on what variables precede it in your program, but also on what other programs are running. The probability of a thunked API call failing randomly is given by the equation: (size of passed array - 1)/65,536. If the array is one byte in length, the probability of failure is zero; if the array is 65,536 bytes long, only the one correct alignment can succeed. Any array larger than 65,536 bytes is guaranteed to fail. This problem is particularly insidious for any passed array existing in application-program heap space--which may have a variable location, depending on the options selected during the operation history of the program. The result of this thunking problem is that even an application program which contains no bugs can fail in a nonrepeatable, intermittent fashion.
The stability of OS/2 jumped dramatically between revision 2.1 and 3.0 when the entire graphics system was rewritten in 32-bit code. Thunked APIs are a classic example of something which almost works. It has become difficult enough to write stable, usable applications without the operating system introducing intermittent problems. Unless--and until--Microsoft demonstrates to the developer community that it has a stable, reentrant solution to this array-boundary condition problem built into its thunking routines, the release of Windows 95 is, in my opinion, inherently unstable, premature, and unacceptable.
Bob Canup
73513.216@compuserve.com
Dear DDJ,
In his column "Apple Talks the Talk and Walks the Dog at WWDC" (DDJ, "Programming Paradigms," August 1995), Michael Swaine attributes SuperCard to Silicon Graphics. The company that developed and distributes SuperCard is Silicon Beach Software, makers of SuperPaint. It's my understanding that Silicon Beach has no connection with Silicon Graphics.
Jack Herrington
jackh@axonet.com.au
DDJ responds: Right you are, Jack. Thanks.
Copyright © 1995, Dr. Dobb's Journal