LETTERS

Birds of a Feather

Dear DDJ,

I am very glad that DDJ tries to inform the "ordinary" programmer about recent developments in connectionism. Generally these articles are very informative. In Michael Swaine's "Programming Paradigms" column "Neural Nets: A Cautionary View" (November 1990), though, I read some things I disagree with.

Swaine says that Fodor and Pylyshyn's (F&P) critique of neural nets is relevant to the potential of neural nets as a programming tool. This is simply not true. F&P's critique could be of some importance for the assessment of neural nets as a psychological model (although I would argue about that, too). As a programming tool, neural nets could be of great importance (and surely they will be) even if they fail as psychological models, which I doubt they will. I don't think it is necessary for neural nets to model some "real" psychological process or structure to be a good programming tool. Take, for example, learning neural nets. Most implement the so-called "backpropagation rule," a learning rule that is surely not to be found in real brains. The point is: Backprop works (though I can think of better learning rules). Nobody would call jet propulsion a bad means of flying just because it doesn't function the way the wings of a bird do, so why should neural nets model nature?

I also disagree with the notion that symbolic processing is really necessary for neural nets to be truly relevant models of psychological phenomena. On the contrary, I believe that the processing of language, for example, could be implemented subsymbolically. This kind of representation, embodied in the connections (not the nodes) as weights and spike frequencies (with some synchronization to realize attentional processes), comes very close to what we know about representation and processing of knowledge in the brain. Then only the input or output has to be symbolic. F&P's critique is surely relevant for older neural nets, but current research concentrates on modular self-organizing neural nets (no backprop, sigh) with more sophisticated connections, and these neural nets won't have the weaknesses of most of the original ones.

My conclusion is that there will be two (loosely related) main streams of connectionism: the engineering/programmers' connectionism dedicated to real-world applications (with no psychological relevance -- like expert systems) and the psychological connectionism. It is very likely that future connectionists will have to choose between jet propulsion and the wings of a bird....

Christian van Hoven

The Netherlands

Tracing Ray Tracing

Dear DDJ,

I would like to commend Dan Lyke on his article on ray tracing. It was understandable and interesting. I have been interested in the graphical aspects of computer programming since I first started programming in C, three years ago. Before this change in perspective, I had been programming on mainframes and minis in Fortran. (What a difference a language makes!)

I took a computer graphics course during my graduate studies to flesh out my self-taught graphics programming. What an eye-opener! The mathematics required to accurately simulate the real world is somewhat tedious. Dan's simplification may mislead some to think that it would be easy to implement a 3-D ray tracer in this fashion and not run into difficulties. Anyone who has read the classic text by Newman and Sproull will realize that throwing around 4 x 4 matrices is not trivial.

At the risk of seeming a total bore, consider the generation of a viewing transformation (VT) matrix. The matrix itself must be a 4 x 4 matrix due to the homogeneous representation of the world as developed by early geometers for working in projective geometry. VT is formed by:

Once VT is formed, it is used for all transformations from the world to the virtual screen. Additionally, VT provides another useful function. Inverting VT provides a means of going from the screen (pixels) to the world for implementation of a ray tracer.

Michael R. Schore

Redlands, California

Bezier Business

Dear DDJ,

I enjoyed Todd King's article, "Drawing Character Shapes with Bezier Curves" in the July 1990 DDJ, but more importantly, I found it extremely practical in the context of one of my projects. Magicorp is a slide service bureau. We accept files from many software packages such as Applause, Harvard Graphics, Freelance, Designer, Artline, etc., and render them into very high resolution (4032 x 2688) 35mm slides and overhead transparencies. We have all 207 Bitstream Fontware fonts in our font library with the character shapes defined as straight vectors. We did this because the only way we knew of rendering Bezier curves was from the parametric equations, and this method was too slow for our production system. Now that we know about the deCasteljau algorithm, however, we can save considerable disk space by changing our font library to represent character shapes in their original Bezier format without too much performance degradation.

I was wondering if Mr. King could supply me with some reference for further reading. In particular, I would be interested in the references that originally made him aware of the deCasteljau algorithm, as well as any other papers or books on the subject of which he is aware.

Philip N. Jacobs

Elmsford, New York

Todd responds: It's interesting you should ask what led me to the deCasteljau method. The original draft of my article did not contain information about the deCasteljau method of calculating Bezier curves. When DDJ technical editor Ray Valdes looked at the article, he recommended that I also look at the deCasteljau method and directed me to CAD: Computational Concepts and Methods, by Glen Mullineux (Macmillan Publishing Co.). The chapter on representing curves has a good discussion of Bezier curves and a general description of the algorithms. This is a good place to start. The references in the book should lead you to the original descriptions of the algorithms by Bezier and deCasteljau (as well as others).

In writing the article I also referred to Fundamentals of Interactive Computer Graphics by James D. Foley and Andries van Dam (Addison-Wesley, 1984). A reader of DDJ also recommends Algorithms by Sedgewick (Addison-Wesley, 1988). I would also refer you to the "Letters to the Editor" section of the November and December 1990 issues of DDJ, since some readers have sent in comments on how to improve upon the efficiency of the implementation presented in my article. Their comments should also prove useful.

B-tree Business

Dear DDJ,

I enjoyed the article "The B-tree Again" by Al Stevens in the December 1990 DDJ. I appreciate in particular his focus on practical implementation of tools for people who don't want or need a lot of theory.

I ran into some trouble when considering how the key handling mechanism would support integers. It occurred to me that the definition of the keyspace within a treenode as a simple character array could lead to trouble on some machines. I didn't notice any mechanism for preventing integer values in the keyspace from being misaligned on machines that require integer alignment on word boundaries.

On some machines this merely causes performance degradation; on others (some of the new RISC-architecture processors) it leads to bus exception errors, i.e., the dreaded "Bus Error, Core Dumped" message from Unix. I hope Al can clarify his approach to this problem for me.

Mark Rosenthal

Louisville, Colorado

Al responds: The B-tree algorithms in my column treat keys as fixed-length character arrays. If I need to use an integral value for a key, I encode the value as an ASCII string. This method uses more space for keys but is less dependent on computer and compiler architectures. To use binary integer values, you would need to address the function that compares keys as well as the alignment problems you have mentioned.

Who's On First?

Dear DDJ,

Michael Swaine's recent article, "Fire In The Valley Revisited" (January 1991) gives the impression that the personal computer revolution started with the MITS Altair computer kit. It didn't. There was a great deal of activity prior to the Altair.

In the early 1970s, many of us were members of Steve Gray's Amateur Computer Society -- a group of dedicated hardware hackers who were building their own computers and computer circuits. Several members cloned versions of Digital Equipment Corporation's popular PDP-8/L minicomputer. The group published a lively newsletter for computer hobbyists.

In July 1974, Radio-Electronics magazine featured my Mark-8 computer on its cover. The computer construction project used Intel's 8-bit 8008 microprocessor chip, and the computer allowed for as many as 16 Kbytes of static RAM. (At that time, a hard disk for a PDP-8/L minicomputer furnished 32K 12-bit words.) Interest in the Mark-8 was very high, and about a thousand of the circuit-board kits were sold. Several mail-order companies offered kits of hard-to-get components. Radio-Electronics sold many copies of the complete booklet that gave all of the construction details and circuit-board layouts. Over the years I've talked with and met many people who built and used the Mark-8. The original Mark-8 is now on display in the Smithsonian Institution's Information Age exhibit in Washington, D.C.

No less an authority than Robert Noyce, the chairman of Intel, recognized the Mark-8 as the first true personal computer. Sure, there were other small computers available at the same time, but none were accessible to an electronics hobbyist or computer buff. The Mark-8 put such a computer in the hands of those people. At least one computer company got its start because of the Mark-8. Some readers may recall the Digital Group, a company that provided a line of CPU-interchangeable computers, many of which were adopted for regular commercial use.

The Mark-8 also spawned at least one publication prior to the Altair. As I recall, Hal Singer and John Craig started the Mark-8 newsletter out in Camarillo, Calif., shortly after the computer appeared in Radio-Electronics. Craig later went on to Infoworld. There were many users groups in the USA, too. Many of these evolved into the groups and clubs that supported the Altair, IMSAI, PET, Apple, and other computers. The clubs and the people were already receptive to computers when the Altair came along.

Keep in mind, too, that the Mark-8 actually worked, right from the first unit. The design was thoroughly tested so that it would work properly whenever a hobbyist constructed a computer. Altair builders weren't so lucky. Many of the original versions didn't work at all, nor were fixes or support readily at hand. Whenever I fired up my Mark-8 -- even as late as 1988 -- it always worked. I still have two nonworking Altairs that one day I'd like to get around to restoring to working condition.

I'm not denigrating the Altair. It was an important link in the chain of personal computer advancements made during the last 17 years. However, let's not revise history and put the start of the PC "revolution" at January 1975. It took place months before.

I wish I could recall more history of the "early days," but most of my source material went to the Smithsonian with the Mark-8. I still have models of and documentation for many older computers, though. Who knows, maybe there are others interested in preserving and restoring these fossils of the computer age.

Jonathan A. Titus

Editorial Director

EDN Magazine

Milford, Massachusetts

Always the Optimist

Dear DDJ,

In reference to Jeff Duntemann's article "Sex and Algorithms" in the October 1990 DDJ, my best guess is that Zeller's Congruence doesn't extend past the year 2000 because Zeller didn't figure that the world would last past the year 2000.

David M. Raley

Laurel Hill, N.C.

Patents, Shapes, and More

Dear DDJ,

I read the "Software Patents" article by The League for Programming Freedom (November 1990) and have a few comments. I have never run into a patent problem, at least not yet, and I hope I never do. I see this as a chicken-and-egg problem: Which is more important -- the algorithm or the software that uses it? On one hand, certain algorithms may make some software work more efficiently, but what is the algorithm's value in the overall success of the software? I have some doubts about patent holders legally going after users of their ideas, except where there is a deep pocket to pick. And from the article itself, it seems that a few companies just buy up patents and go looking for a successful product that uses their patented algorithms. And for them, it's a very good business; they don't have to market products -- just hold the patent and retain a legal firm. So in the modern world you don't have to produce anything -- just collect from people who do. What an idea!

The article "An Existential Dictionary" by Edwin Floyd (November 1990) was particularly well done. It showed some of the thought processes and mistakes that are always part of a project. Perhaps Mr. Floyd will write more articles in the future.

In addition, I found the geometric shapes on the cover and interspersed among the articles to be fascinating, especially since they were made of paper and used no glue. I was wondering if you know where I could get a book about modular origami.

William Tennyson

Columbia, Missouri

Editor's note. For more information on modular origami, write to Vicki Mihara Avery at P.O. Box 371144, Montara, CA, 94037. Vicki is the artist who provided the origami for the November issue.

How Fast Is Fast?

Dear DDJ,

In Bruce Tonkin's article on PowerBasic (July 1990), he mentions that the expanded string space in that compiler (compared to QuickBasic?) carries a small penalty of slower operation due to the larger memory spaces available for PowerBasic's string operations. The tables on pages 76 and 77 show the MID$ operations to be about 3.4 times slower in PowerBasic than they are in the QuickBasic 4 and 7 compilers.

My feeling is that 3.4 times is not a small difference when you consider what the MID$ operation does in many commercial programs. Many people use the MID$ function to move data in sort buffers and/or text-editing buffers, where the buffers range from 30 Kbytes or so in size up to several hundred Kbytes, and the string-shifts need to be nearly instantaneous.

Basic's capability to do these string moves is just adequate in the Microsoft compilers using small buffers on a PC or large buffers on an AT, but would be unacceptable on these same machines using PowerBasic. What does Bruce think?

Dale Thorn

Round Lake, Illinois

Bruce responds: I can't agree that the time difference for the MID$ operation is important. Yes, PowerBasic is slower, taking about 80 seconds per million operations compared to about 25 for QuickBasic 4.x or Basic 7.0. A meaningful comparison is not that easy, though, as my review mentioned.

Few programs will need to do anything like a million MID$ operations. For reasonable programs, several thousand to ten thousand operations will be more typical -- and for them, the difference will be much less than a second.

Further, PowerBasic allows fully dynamic string space to be over 400K on a 640K machine. QuickBasic and Basic 7.0 will not allow more than 64K per array (and under QuickBasic 4.x, the limit is more like 50K with no other dynamic space available). The only way to get more than 64K in a single string array with any Microsoft Basic is to use fixed-length strings, and to get 128K or more the string lengths must be a power of two.

Also, PowerBasic removes the need for many MID$ operations. There is an equivalent of the FIELD statement that can be used on arbitrary strings. So you can look at or assign any part of any string without using MID$ at all -- and PowerBasic's assignment operation is actually a little faster than QuickBasic's. That kind of thing is ideal for changing record buffers.

Let's take a sort program that uses large record buffers. I'll assume that the individual records are no more than 32 Kbytes. Here's what happens when you write that application in QuickBasic or Basic 7.0, compared to PowerBasic:

  1. The Microsoft versions will limit the dynamic string space to 64 Kbytes per array, forcing a sort that uses dynamic string arrays to be much smaller than memory. PowerBasic allows the programmer to use all available memory for dynamic strings.
  2. Fixed-length strings must be predeclared as to length (a power of two if an array of 128K or more) in the Microsoft versions, meaning that a general-purpose sort is much more difficult to write. Microsoft's fixed-length string assignment operations are slower than their dynamic equivalents by a factor of about two. PowerBasic strings are fully dynamic.
  3. In QuickBasic or Basic 7.0, you'll have to write your own sort, and you'll need to use MID$ to sort on the middle part of a string. In PowerBasic a sort is built-in, and you can specify the starting position for the sort -- no MID$ is required.
  4. All versions of Microsoft Basic slow down drastically for string operations as string space becomes full. PowerBasic actually becomes faster. If you're running close to the edge, PowerBasic can show astounding speed improvements over QuickBasic or Basic 7.0. This is the kind of thing that's hard to put into a benchmark table (how full is "full"?) but can be worth plenty in an application.
You mentioned text-editing buffers. The PowerBasic functions that allow you to strip any leading or trailing characters, or remove any unwanted characters from a string, can get rid of a lot of otherwise hand-coded routines -- again, removing the need for a lot of MID$ operations.

For the last six years, I've sold a word processor written in Basic. To get a version that allowed text files of more than 64 Kbytes using QuickBasic, I had to store text in fixed-length string blocks and convert it between dynamic strings and blocks. I wrote all the allocation, deallocation, and garbage-handling routines myself. It was not a pleasant job, and debugging was a pain. With PowerBasic, I removed those routines -- and the result ran a lot faster. The search and replace functions still use MID$, but run as much as ten times as fast because there's no need for blocking or deblocking with PowerBasic.

Raw benchmark numbers can be valuable. They can also be misleading or irrelevant. I can understand your concern with a factor of 3.4 speed difference, but in this case I think it's unlikely to make any difference in your applications; PowerBasic's other advantages can overwhelm the effect.

I do suggest you buy a copy of PowerBasic and write some applications to take advantage of the new features. Though MID$ may be slower, I think you'll find (as I have) that you'll need to write less code to get the job done. That was the point I tried to make in the review, and perhaps I didn't make it well enough.

Summing Up Patents

Dear DDJ,

I am writing this letter in protest of today's situation concerning software piracy and patenting algorithms. As a 13-year-old whose sole income is gained from mowing lawns, allowance, and presents, I cannot always afford the software I need. I try shareware, and some is good, but a lot of it stinks. I am currently scrounging to buy QuickC so I can learn C. In a way, I view software piracy as "illegal shareware." Many people will get a copy and try it out. If they want to use it, they probably will purchase it anyway. Some software is simply overpriced: $389 for Lotus 1-2-3?

Patenting algorithms is the stupidest thing I have ever heard of. Who can tell you not to multiply by adding x to itself y times? Same for other formulas.

Jonathan Cooper

Clearwater, Florida


Copyright © 1991, Dr. Dobb's Journal