Letters

Dr. Dobb's Journal November 2000

Graphics Algorithms

Dear DDJ,

Thanks for a very interesting discussion of intersection algorithms in the article "Triangle Intersection Tests," by Eric Haines and Tomas Möller (DDJ, August 2000). A fast 2D point-inside-triangle test, including calculation of barycentric coordinates, is, in my opinion, most important for tomorrow's graphics hardware. But even in this case, there may still be room for improvement. The Haines-Möller algorithm deals with a mathematical triangle, but in the real world, things might look simpler. We can precalculate matrices of 16×16 pixels covering all cases of triangles inside, where every cell of a matrix contains the (u,v) barycentric coordinates for that point, (2,2) for points outside the triangle, and negative values for culled triangles. If a triangle is bigger than 16×16, we must fall back on the Haines-Möller algorithm, although in complex scenes (with many thousands of triangles) this will be a rare case. Otherwise, we can render this 16×16 matrix with appropriate hardware, much like a BitBlt. If we assume that the values in the matrices are of the 4-byte float type, and that the topmost point of the triangle is on the topmost line of the matrix, this precalculated lookup table will occupy 16^5 * 2 * sizeof(float) bytes = 8 MB, which is an acceptable texture-memory loss for future 3D graphics hardware.
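
A minimal software sketch of filling one such matrix follows (the class and method names are illustrative, not from the article, and a nondegenerate triangle whose vertices fall within the 16×16 cell is assumed):

```java
// Sketch: precompute barycentric (u,v) at each pixel center of a 16x16
// cell, storing (2,2) for pixels outside the triangle as described above.
class BaryTable {
    static float[][][] barycentricGrid(float ax, float ay, float bx, float by,
                                       float cx, float cy) {
        float[][][] grid = new float[16][16][2];
        // twice the signed area; assumed nonzero (nondegenerate triangle)
        float det = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
        for (int y = 0; y < 16; y++) {
            for (int x = 0; x < 16; x++) {
                float px = x + 0.5f, py = y + 0.5f; // sample at pixel center
                // solve P = A + u(B-A) + v(C-A) by Cramer's rule
                float u = ((px - ax) * (cy - ay) - (cx - ax) * (py - ay)) / det;
                float v = ((bx - ax) * (py - ay) - (px - ax) * (by - ay)) / det;
                boolean inside = u >= 0f && v >= 0f && u + v <= 1f;
                grid[y][x][0] = inside ? u : 2f;
                grid[y][x][1] = inside ? v : 2f;
            }
        }
        return grid;
    }
}
```

Rendering hardware would index a precomputed family of such matrices by triangle shape rather than evaluate them per frame; the function above merely shows what each table entry holds.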

Mirza Hadzic

mirza@atlas.cz

Exploiting 64-bit Parallelism

Dear DDJ,

Ron Gutman's "Algorithm Alley" article (DDJ, September 2000) reminds us nicely that bit operations can often lead to surprisingly efficient solutions to common problems. I first saw the bit-count solution as a bitboard manipulation used in the DarkThought chess project. More information on this and other useful bit operations can be found at http://wwwipd.ira.uka.de/Tichy/DarkThought. Based on this and my own tuning, I would suggest the algorithm can be enhanced to save several AND operations, as follows:

static long k5 = 0x5555555555555555L;
static long k3 = 0x3333333333333333L;
static long kF = 0x0F0F0F0F0F0F0F0FL;

private int CTPOP(long bc) {
    bc -= (bc >>> 1) & k5;               // 2-bit partial counts
    bc = ((bc >>> 2) & k3) + (bc & k3);  // 4-bit partial counts
    bc = ((bc >>> 4) & kF) + (bc & kF);  // byte partial counts
    bc += bc >>> 8;                      // accumulate the bytes; no masks
    bc += bc >>> 16;                     //   needed, since the total (at
    bc += bc >>> 32;                     //   most 64) fits in one byte
    return (int) bc & 0xFF;
}

It is perhaps worth mentioning that several of today's CPUs -- the Cray, Compaq Alpha, PowerPC, and UltraSPARC, for example -- include a native CTPOP instruction.

For the Intel x86, the CTPOP on a 64-bit word can be efficiently coded by using the 32-bit integers. Only the first two steps in the computation need to be duplicated for the top and bottom 32 bits. After this, the partial results can be combined and completed in a single 32-bit register.
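
The two-register scheme can be sketched in portable form as follows (two ints stand in for the 32-bit registers; the name ctpop64 is mine):

```java
// Sketch of the 32-bit split described above: the first two steps run on
// each half separately, the halves are then summed, and the count is
// completed in a single 32-bit register.
class PopCount32 {
    static int ctpop64(int hi, int lo) {
        hi -= (hi >>> 1) & 0x55555555;                      // 2-bit counts
        lo -= (lo >>> 1) & 0x55555555;
        hi = ((hi >>> 2) & 0x33333333) + (hi & 0x33333333); // 4-bit counts
        lo = ((lo >>> 2) & 0x33333333) + (lo & 0x33333333);
        int bc = hi + lo;             // each nibble now holds 0..8: no overflow
        bc = ((bc >>> 4) & 0x0F0F0F0F) + (bc & 0x0F0F0F0F); // byte counts
        bc += bc >>> 8;
        bc += bc >>> 16;
        return bc & 0xFF;
    }
}
```

Note that the halves can be added before the byte-summing step because each 4-bit field holds at most 4, so the sum of two fields still fits in a nibble.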

Finally, in the Bit Reverse algorithm, the masks are not required in the first line as the shift operations will clear these bits anyway.
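
For readers without the September issue at hand, a common form of the 32-bit reverse is sketched below (not necessarily Gutman's exact code). Written with the half-word swap on the first line, that line indeed needs no masks, since the zero-filling shifts clear those bits anyway:

```java
// Sketch of the usual shift-and-mask bit reverse; the first line omits
// the masks, as suggested above, because >>> and << zero-fill anyway.
class BitReverse {
    static int reverse(int v) {
        v = (v >>> 16) | (v << 16);                              // swap halves (no masks)
        v = ((v >>> 8) & 0x00FF00FF) | ((v & 0x00FF00FF) << 8);  // swap bytes
        v = ((v >>> 4) & 0x0F0F0F0F) | ((v & 0x0F0F0F0F) << 4);  // swap nibbles
        v = ((v >>> 2) & 0x33333333) | ((v & 0x33333333) << 2);  // swap bit pairs
        v = ((v >>> 1) & 0x55555555) | ((v & 0x55555555) << 1);  // swap adjacent bits
        return v;
    }
}
```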

Phil Bagwell

pbagwell@iprolink.ch

Worker Shortage

Dear DDJ,

"Name Withheld By Request" ("Letters," DDJ September 2000) is obviously correct in saying that "...a good programmer is a good programmer, regardless of the language..." One can, therefore, presume that after a decade or so, a good programmer will have accumulated assorted languages on his own time. Nowadays, it is not like the '60s and '70s, when one had to pull strings to get computer access. Computers are inexpensive. Books and programming tools are cheap-to-free. The major vendors of operating systems and programming languages all have systems of examinations and diplomas. H-1B visas require "Labor Certification," meaning that the employer has to go through the motions of trying to recruit a citizen or permanent resident before hiring a foreigner, and the foreigner has to be paid the "customary wage." The remedy for an aggrieved American programmer is to become certified in new languages and apply for jobs in them. It is so obviously to a career programmer's advantage to learn new languages that failure to do so is a legitimate ground for concern.

The long-term trend over the last 40 years has been an increase in the number of systems analysts relative to ordinary programmers. I anticipate that this trend will continue. Increasingly, the new model of programmer is someone who works with end users and develops interactive systems to fill their needs, not someone who sits in a computer center writing batch-run programs from neatly typed Central Computing Data Report Request forms. At the extreme outer edge, there are all kinds of "consultants" whose jobs are to train or otherwise assist end-users. Working with end-users obviously puts a premium on the programmer's communications skills, and that is the great advantage of the American programmer over foreigners. Based on my acquaintance with foreign graduate students, who are probably a more elite group than H-1B admittees, I very much doubt that many of the H-1Bs can speak idiomatic English when they arrive. Further, there are a wide range of government-funded jobs for which American citizenship is a prerequisite.

"Name Withheld By Request's" circumstances may be special, but the obvious remedy would be to try to find something he can do better than a foreigner can.

Andrew D. Todd

u46a8@wvnvm.wvnet.edu

Computer History

Dear DDJ,

My compliments on the September 2000 issue of DDJ. You've provided lots of fun stuff. I especially liked David Birkett's Maxwell and the article on Zuse's computer by Raul Rojas.

I went to the International Conference on the History of Computing that Rojas organized in August 1998 at the Heinz Nixdorf MuseumsForum in Paderborn, Germany. Raul had put together a very full three days of presentations by big-name historians like I. Bernard Cohen, and by people who had been involved firsthand in the early projects and the recent reconstructions, before I even got wind of the gathering; but he invited me to bring a poster presentation on the antiaircraft gun-control design work done in 1939 and 1940 at MIT, RCA, NCR, Eastman Kodak, and Bell Labs, evaluating electronic digital computation. The rapid multiplier design in the ENIAC resulted directly from the MIT and RCA projects. Eckert's original ENIAC design performed multiplication with repeated addition, but when he was finally allowed to see the tech reports from the earlier gun director work, he quickly adopted the matrix selector switch to embody a multiplication table for one-step results.

The ENIAC Portable Function Tables also used the matrix switch principle. When von Neumann saw the design of the function tables, he couldn't believe they worked. He said they were just a massive short circuit. There's one of the differences between a mathematician and an engineer.

Which reminds me, I found what was probably the very first use of the concept of digital electronic computing in that early wartime gun-control work. The engineers were calling these devices "pulse computers." When George Stibitz wrote his assessment of all the gun director projects, he pointed out in a classified memo that "digital computers" would be a better term, independent of the accidents of the physical embodiment of the processes.

Finally, I recall what one of my professors said in 1975, after visiting Doug Engelbart's Knowledge Augmentation Lab and using the mouse. He said, "I don't know about the human factors of such a pointer; you sort of have to do gymnastics." Experience really changes with time, doesn't it?

Berney Williams

bwilliams@cmp.com

Analog Versus Digital

Dear DDJ,

Although I have never programmed in C++, and very rarely in C, I have been reading Al Stevens's DDJ column regularly for many years to get the philosophy expressed between the source-code lines. But I was most impressed by the philosophy expressed in your piece on MP3 in the September 2000 issue. (The fact there were no source-code lines didn't bother me a bit.)

I don't have a "golden ear," so I can't say that I can tell the difference between music played through a solid-state amplifier and through one that contains vacuum tubes, even though I once built a Williamson amplifier. (I have no trouble telling the two devices apart by lifting them. Thus, I prefer solid-state audio equipment.) In fact, at my age, I can no longer hear the tape noise, so I don't even use Dolby when recording. But, in spite of my observed inability to hear high single-frequency tones, and theory to the contrary, I can still hear the "high-frequency" sound when the bow picks up the strings. Thus, the engineering specs for an audio system do not have much meaning to me; I would rather listen to the music than read the specs. In any case, the difference between recorded music (no matter how expensive the audio equipment) and live music is instantly apparent to me. While (depending on the music) there may be great pleasure in listening to recorded music, the "excitement" of a live performance is always missing. ("Miked" Broadway shows have that "recorded" sound, even though the performers are in front of me.)

If music recorded on CD is typical of digital music in general, there is even more lost when the analog signal is digitized. My existence theorem: I have two editions of the 1938 Benny Goodman Carnegie Hall concert. One is the after-the-fact "original" issue on a set of 33-rpm LP vinyl records. The other has been remastered to CDs. Nobody who has listened to both disagrees with me that there is more "life" in the old vinyl recording (even though it is not "live" music). Even the no-Dolby tape cassettes I have recorded from the LPs (to save the surfaces) have more "life" than do the CDs.

Now, you said that although you could tell the difference in sound between the MP3 version and the CD version, you were not willing to state which sounded "better." How about the difference in sound between analog and digital recordings (ignoring surface noise)? Maybe there are some things that just shouldn't be digitized.

Murray Lesser

mlesser@attglobal.net

DDJ