Letters

Dr. Dobb's Journal September 1999

Real Programmers Hate Cobol

Dear DDJ,

In his Online Op-Ed "Real Programmers Hate Cobol" (http://www.ddj.com/oped/1999/hend.htm), Philip Hendrickson mentions the "Wolff programming aptitude test." I'd be interested in knowing more about this test. Is there any information about it on the Internet (or elsewhere) to which you can refer me? (I did a variety of searches using AltaVista and turned up nothing.) Thanks for any information you can provide!

Brian Harrison

Philip responds: Thanks for your question, Brian. The Wolff programming test is an exam typically given by an employer to IT job applicants to assess their general programming aptitude. Correctly answering the questions requires logical problem solving skills, focus, and thorough attention to detail. It does not require knowledge of any particular programming language or environment. This is not a test of programming syntax or coding style. It is designed to determine how a person approaches logic problems, thus exposing the aptitude that person has for learning and mastering programming skills. My employer gets the tests from a testing firm out of New Jersey -- Walden Personnel Testing and Training. Walden doesn't use the original Wolff test, but has developed a variety of tests based on the Wolff model. Walden has also worked with us to develop custom aptitude tests for other IT areas, such as microcomputer support. Last I heard, each test costs in excess of $100.00, which includes scoring and analysis by Walden after the test is taken. So we only test applicants we are seriously considering hiring. The results of these tests help identify people with good potential, even if they don't yet possess the skills we are seeking. The tests also help weed out applicants who have an impressive resume but don't have the aptitude to excel in the field. For more information, contact Walden Testing at 800-361-4908 or http://www.waldentesting.com/.

Dear DDJ,

Philip Hendrickson's Online Op-Ed "Real Programmers Hate Cobol" (http://www.ddj.com/oped/1999/hend.htm) was a great article! I, too, am not a real programmer, although I don't excel in Cobol. There's something to be said for those "programmers" who maintain older applications and are not concerned with the cutting edge of technology. It's true that headhunters consider those without the "in" tools to be limiting their careers, as if that were something wrong. I keep up with the slang catchwords, but let others pave the way for me.

I may not be like Lewis and Clark, but I'll take the wagon roads after they've been paved. BTW, I believe that dinosaurs still exist -- they are just hiding out.

Murel Warren, Jr.

Warren@lcms.org

1984

Dear DDJ,

1984 was quite an interesting year for Michael Swaine to discuss in his July 1999 "Programming Paradigms." Doubtless, a great many readers will write in with their own favorite contributions which they feel Michael missed. Here's mine. In 1984, Guy Steele first published Common Lisp: The Language (Digital Press), which described a new convergent standardized form of Lisp. Fifteen years later, NASA's most advanced spacecraft control experiment yet is built using Common Lisp (see http://www.harlequin.com/news/press/devtools_0599.html). If you're doing software R&D, it's the ultimate tool.

Jason Trenouth

jason@harlequin.co.uk

Hilbert Curves

Dear DDJ,

In the July "Algorithm Alley" column, Ron Gutman did an excellent job with one of my favorite subjects -- the Hilbert Curve (DDJ, July 1999). In particular, I'm sure his insight into quadtree depth and range list complexity will be helpful to many. However, I believe that one of his assertions is incorrect; the cell numbering system does not need to take into account the quadtree depth. The cell number can be represented as a fraction between zero and one. Each quadrant at a succeeding quadtree depth can be represented by a base-4 digit, and these digits can be strung together to form the fraction (for example, "45" in Figure 3(c), page 118, translates to .231 base 4 or 0.703125 decimal). This algorithm adds precision in the least-significant digits, preserving order independent of depth. The resulting numbers' most compact representation would be as fixed-point fractions; the implementation would store them as unsigned integers with the digits left-justified, shifted toward the most significant bits. Ron's numbering scheme arrives at the same representation; my comments are at best an interesting footnote and at worst, overly picky. Good job, Ron.

Lee Kamentsky

LeeK1@mediaone.net

Ron responds: Lee, I'm very glad you liked the article. My point about Hilbert numbering and quadtree depth was that you have to decide in advance how many bits of precision you are going to use, and the number of bits must accommodate the maximum quadtree depth. A 32-bit integer can deal with no more than 16 quadtree levels. Of course, if you are using 32-bit integers, it makes sense to use all 32 bits to get the maximum quadtree depth (16) that 32 bits can give you.
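Lee's fractional numbering and Ron's bit-budget point can both be sketched in a few lines. This is a hypothetical illustration, not code from the column; the class and method names are mine. The digits 2, 3, 1 from cell "45" in Figure 3(c) yield 0.703125, and left-justifying the same two-bits-per-level digits in a 32-bit word (hence at most 16 levels) gives the fixed-point form both schemes arrive at:

```java
// Sketch of base-4 fractional Hilbert cell numbering.
// Each quadtree level contributes one base-4 digit after the radix
// point, so keys sort correctly regardless of quadtree depth.
public class HilbertKey {
    // Interpret quadrant digits (most significant first) as a base-4 fraction.
    static double toFraction(int[] digits) {
        double f = 0.0, scale = 0.25;
        for (int d : digits) {
            f += d * scale;
            scale /= 4.0;
        }
        return f;
    }

    // Fixed-point form: pack 2 bits per level, then left-justify in a
    // 32-bit word so keys of different depths still compare correctly.
    static long toFixedPoint(int[] digits) {
        long key = 0;
        for (int d : digits) {
            key = (key << 2) | d;
        }
        return key << (32 - 2 * digits.length);
    }

    public static void main(String[] args) {
        int[] cell45 = {2, 3, 1};  // ".231" base 4, cell "45" in Figure 3(c)
        System.out.println(toFraction(cell45));    // 0.703125
        System.out.println(toFixedPoint(cell45));  // 45 << 26 = 3019898880
    }
}
```

Note that the packed digits alone spell out 45 in binary; the left-justification is what makes a 2-level key and a 3-level key comparable.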

Grepping and Globbing

Dear DDJ,

I read Al Stevens's May 1999 "C Programming" column about filename globbing in grep. I have published some reusable code that does quite a good job of bringing globbing to MS-DOS. It permits (slightly) more sophisticated name matching than DOS, and can optionally include searching down subdirectories.

It was published as part of a grep program! The program name is Grouse Grep, and I released it as part of an article entitled "High-Speed Finite-State Machines" in DDJ (November 1997). The grep program is very, very fast. It has one major bug -- it doesn't handle files with NULs in them.
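As an illustration of the kind of name matching such globbing involves -- a hypothetical sketch, not the published Grouse Grep code -- a minimal recursive matcher for the DOS-style '*' and '?' wildcards might look like:

```java
// Minimal DOS-style glob matcher: '?' matches any single character,
// '*' matches any run of characters (possibly empty).
public class Glob {
    static boolean match(String pattern, String name) {
        return match(pattern, 0, name, 0);
    }

    private static boolean match(String p, int pi, String n, int ni) {
        while (pi < p.length()) {
            char c = p.charAt(pi);
            if (c == '*') {
                // Try every possible length for the '*' run.
                for (int skip = ni; skip <= n.length(); skip++) {
                    if (match(p, pi + 1, n, skip)) return true;
                }
                return false;
            }
            if (ni >= n.length()) return false;
            if (c != '?' && c != n.charAt(ni)) return false;
            pi++;
            ni++;
        }
        return ni == n.length();  // pattern exhausted: name must be too
    }

    public static void main(String[] args) {
        System.out.println(match("*.c", "grep.c"));     // true
        System.out.println(match("g?ep.*", "grep.c"));  // true
        System.out.println(match("*.c", "grep.h"));     // false
    }
}
```

A production matcher -- and certainly a high-speed one like the article's finite-state machines -- would avoid this backtracking recursion, but the semantics are the same.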

Brenton Hoff

behoffski@grouse.com.au

Testing Java Classes

Dear DDJ,

In Krishnan Rangaraajan's July 1999 "Java Q&A" column ("How Can I Test Java Classes?"), the example MyStack class has at least one bug that was not caught by Krishnan's testing. If the stack is initialized to a size of Integer.MAX_VALUE, and exactly Integer.MAX_VALUE elements are pushed onto the stack, an additional push() will throw a java.lang.ArrayIndexOutOfBoundsException, and an additional pop() will throw a "Stack Underflow Error" exception. Neither seems like correct behavior, though there is no formal specification. Furthermore, since there is no formal specification, it is not obvious what the correct behavior should be after an instantiation of "new MyStack(-1)." The current implementation throws a java.lang.ArrayIndexOutOfBoundsException.
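A hypothetical hardened variant -- the name and exception choices here are mine, not the column's MyStack -- shows the kind of explicit guards these cases call for:

```java
// A bounds-checked stack illustrating the three failure cases above:
// negative capacity, overflow, and underflow each get an explicit,
// documented exception instead of an incidental one.
public class CheckedStack {
    private final int[] items;
    private int top = 0;  // number of elements currently stored

    public CheckedStack(int capacity) {
        if (capacity < 0) {
            throw new IllegalArgumentException("capacity < 0: " + capacity);
        }
        items = new int[capacity];
    }

    public void push(int value) {
        if (top == items.length) {
            throw new IllegalStateException("stack overflow");
        }
        items[top++] = value;
    }

    public int pop() {
        if (top == 0) {
            throw new IllegalStateException("stack underflow");
        }
        return items[--top];
    }

    public static void main(String[] args) {
        CheckedStack s = new CheckedStack(2);
        s.push(1);
        s.push(2);
        System.out.println(s.pop());  // 2
        try {
            new CheckedStack(-1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Whether these are the "correct" behaviors is exactly the point: without a specification, any choice is defensible, so the choice should at least be deliberate and documented.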

A program should first be specified, then written together with a proof of correctness. I recommend reading The Science of Programming, by David Gries (Springer-Verlag, 1981).

Martin Handwerker

martin@epiphany.com

The Version Control Process

Dear DDJ,

As a longtime advocate of good automated version control at the development team level, I read Aspi Havewala's article in the May 1999 issue with considerable interest. I happily agree with his conclusions on what should and shouldn't be checked into a configuration management database, on the importance of good automated build procedures, and on the frustration of broken builds.

On the other hand, I was disappointed that he seemed to focus exclusively on the lowest common denominator of version-management software. I feel that the article would have been much improved by a brief discussion of how automation can overcome some of the problems he describes, and by at least briefly touching on how the various systems can help you avoid them.

For example, the system we currently use, CVS, has an excellent "diff" capability. Most of the others either have something like this or should. Encouraging team members to do a "cvs diff" or the equivalent across the source tree before committing (checking-in) their changes and verifying that they are all "intended" can prevent many problems.

The presumption of "locking" is also something that bothers me because, in my experience and seemingly yours, it is the single biggest source of developer misbehavior regarding version control practice. CVS requires no locking; it won't allow a developer to commit changes that weren't made against the current revision, and it helps the developer (usually without intervention) merge concurrent changes. Whole classes of usage and usability problems become nonissues with this approach, solving so many problems that I feel that any system that doesn't support it is mainly suitable for "Iron Mountain CM" use.

While it may sound like I am selling CVS, that is not really my point. A quote I've seen around the net says, "If what you're doing doesn't work, try something else," and ancient optimization wisdom says that using a better algorithm may get you an order of magnitude improvement over the best hand-tuning of the wrong algorithm. Along the same lines, if locking discipline is a major source of headaches, eliminate it. If incomplete or inadvertent check-ins are a problem, provide and strongly encourage the use of tools that make them easy to avoid. Above all, automate anything you reasonably can.

At least a couple of other points occurred to me while reading the article and writing this note, but I seem to have forgotten them -- and this letter is already more than long enough.

Tom Culliton

culliton@clark.net

Aspi responds: Tom, thanks for the feedback. I truly appreciate your comments and have noted all of them. I do remember a conscious decision being made by us to write to the lowest common denominator. We figured a lot of people use different configuration management software and wanted to address all of them. My old company still uses the command-line version of tlib. Certainly, your comments regarding CVS apply, but it would be inappropriate for me to mention a single product in a generic article.

Your comments regarding the next generation of functionality being provided by your product also hold. Generating a discussion on the version control process and prompting people to think more about it was one of the primary goals of the article. Hopefully, your letter will generate a discussion that will prompt DDJ to publish another article on this topic.

DDJ


Copyright © 1999, Dr. Dobb's Journal