Dear DDJ,
Edwin Floyd's "An Existential Dictionary" (November 1990) is a good example of software which raises questions about patents. It is my understanding that U.S. patents may be challenged successfully if they are issued to other than the inventor (prior art), or if the alleged invention is obvious to one trained in the art. But although Mr. Floyd's contribution, which strikes me as having widespread application, is now obvious to me, it was not so before I read about it in your magazine.
I don't agree with those who view the patent process as a scheme run by incompetents to prevent software creativity. The patent office does not defend patents issued -- ordinarily the inventor must do that -- but it will make an effort to understand the often-arcane languages used by applicants, and to issue patents when evidence indicates a possible advance in the art. Neither the office nor the applicant can say with certainty that an advance did occur.
And the inventor who holds a letter patent gains some control over the use of his invention for 18 years, but none thereafter.
I wonder if the letter patent or the copyright is the correct basis of a structure to legally sanction software. My preference would be a scheme analogous to the copyright of music. The author of a protected work would receive royalties from varying instantiations of his work, and mere changes in the language or identifiers used would not suffice to escape the copyright.
But I think we need some assurance that royalty fees will be reasonable, so that programmers can correctly assume use of copyrighted techniques as an element of their own work.
In the case of software developed at public universities, perhaps the taxpayers should receive rights to it. Surely those who claim MIT will not be able to authorize free use of its faculty's software are crying wolf. And if a student makes a contribution, does he lose the rights of the citizen or resident?
Back to Floyd's dictionary: Remember that the CRC algorithm is "bit-oriented" and only the significant bits of the key should be run through, thus avoiding trailing nulls and unused eighth bits. Also note that one may adjust the length of the CRC to be computed, by making the divisor one bit-position longer than the desired result.
In Floyd's application, for example, setting 4 bits in a table of 256 bytes requires a bit address of 8 + 3 = 11 bits, and four instances of such a pseudorandom 11-bit number. To satisfy such a request, one could generate a CRC 15 bits long and take from it four differing groups of 11 bits. But then some bits will have been used four times, and others only three; theory would indicate a decrease in the entropy of the information. One may generate a CRC of length n by using a (preferably prime) divisor with a 1 in bit positions n and 0. The result then lies in positions (n - 1) .. 0 (if we're shifting right, toward the little end).
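The length-adjustment trick can be sketched in C. The divisor here (x^15 + x^3 + x + 1, not necessarily prime) and the right-shifting form are illustrative choices of mine, not Mr. Floyd's:

```c
#include <stddef.h>
#include <stdint.h>

/* Bit-serial CRC over only the significant bits of a key, shifting
 * right (toward the little end).  The divisor is one bit longer than
 * the result: x^15 + x^3 + x + 1 yields a 15-bit CRC in positions
 * 14..0.  POLY_REV is that divisor's low 15 coefficients, bit-reversed
 * for the right-shifting form.  Divisor chosen for illustration only. */
#define CRC_BITS 15
#define POLY_REV 0x6800u          /* x^3 + x + 1 reflected in 15 bits */

uint32_t crc_bits(const uint8_t *key, size_t nbits)
{
    uint32_t reg = 0;
    for (size_t i = 0; i < nbits; i++) {
        uint32_t inbit = (key[i >> 3] >> (i & 7)) & 1u;  /* next key bit */
        uint32_t feedback = (reg ^ inbit) & 1u;
        reg >>= 1;
        if (feedback)
            reg ^= POLY_REV;
    }
    return reg;                   /* occupies positions CRC_BITS-1 .. 0 */
}
```

Because the caller passes a bit count, trailing nulls and unused eighth bits are simply never fed in, as noted above; four 11-bit groups can then be cut from the 15-bit result.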
Another problem found with some CRCs is that the divisor is (wrongfully?) chosen to be symmetric with regard to bit position, so that keys with the same word-length as the CRC length can be shifted in from either end. This is the old little-endian, big-endian micro-mainframe struggle. But a divisor with asymmetric placement of 1s and 0s will permit calculation of a CRC which is sensitive to the shift-in order of the key.
Should Floyd's existence table be used to statistically reduce the number of time-consuming "complete" searches, one could accept some failures of the technique used, as the wasted futile searches would be balanced out by the speed of the preliminary.
My variation of Floyd's algorithm initializes the "existence" table to a suitable proportion of 1s and 0s (100 percent of either being special cases), and then uses a marking scheme which forces some bits up and others down, according to the key. Rather than returning proof of nonexistence or probability of existence, the modified algorithm can be designed to return a statistical indicator of probability which is suited to the application. This estimator is always correct for the most recent entry, but can be expected to deteriorate in accuracy as more and more entries are written over it. However, even very "noisy" entries return a value potentially useful.
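For readers unfamiliar with the underlying structure, a minimal sketch of the basic existence table (set a few hashed bits per key, test them on lookup) follows. The hash is a simple multiplicative mix of my own choosing, standing in for Floyd's CRC, and this shows the plain technique, not the weighted statistical variation described above:

```c
#include <stdint.h>

/* Minimal "existence table": hash each key to a few bit positions,
 * set them on insert, test them on lookup.  A clear bit proves the
 * key was never entered; all-set bits only suggest presence. */
#define TABLE_BITS 2048           /* a 256-byte table */
#define NUM_PROBES 4

static uint8_t table[TABLE_BITS / 8];   /* zero-initialized */

static uint32_t probe(const char *key, int i)
{
    uint32_t h = 2166136261u ^ (uint32_t)i;  /* FNV-style mix; example only */
    for (const char *p = key; *p; p++)
        h = (h ^ (uint8_t)*p) * 16777619u;
    return h % TABLE_BITS;
}

void mark(const char *key)
{
    for (int i = 0; i < NUM_PROBES; i++) {
        uint32_t b = probe(key, i);
        table[b >> 3] |= (uint8_t)(1u << (b & 7));
    }
}

int possibly_present(const char *key)
{
    for (int i = 0; i < NUM_PROBES; i++) {
        uint32_t b = probe(key, i);
        if (!(table[b >> 3] & (1u << (b & 7))))
            return 0;             /* definitely absent */
    }
    return 1;                     /* probably present */
}
```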
If Floyd's technique has been known for a long time, at least in theory, perhaps it is traditional. But it was new to me, and I thank him for publishing his findings.
Jon W. Osterlund
Greeley, Colorado
(Editor's note: Patents are valid for 17 years, not 18.)
Dear DDJ,
Lawyers! Win friends and influence people whilst making a killing: Patent your arguments.
Just imagine:
Like arithmetic, logical argument has been around for a long time. The courts, like computers, operate under rigid rules. Lawyers act like programs within the machine by using arguments built step by step. The application of an argument may be content-dependent (such as this one), or it may be generalized into a set-piece (this same argument when applied to, say, chess or football). Just strip out the terms and use variables: argument = chess move | football play; lawyer = grandmaster | 300 lb. fella; court = chess game | football game.
The point is, of course, that an argument can be patented just like an algorithm. I have not patented this argument, but I might. Until then, consider it as prior art.
Frederick Hawkins
Allentown, Pennsylvania
Dear DDJ,
I have just tried to read the article on software patents in your November 1990 issue. I could not finish it because my blood was beginning to boil. I was dumbfounded to see such simple algorithms being patented.
I wonder how many professors of computer science know that the XOR technique for cursors is patented. This algorithm is standard stuff for graphics classes. Every year, thousands of computer programming students break the patent when they write a simple cursor routine.
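The technique in question really is textbook material; because a ^ b ^ b == a, XORing the cursor pattern into the frame buffer a second time erases the cursor and restores the original pixels, with no saved backing store. A sketch, using an 8x8 byte array as a stand-in for real video memory:

```c
#include <stdint.h>

#define CUR 4   /* cursor size; clipping omitted for brevity */

/* XOR the cursor pattern into the frame buffer at (x, y).  Calling
 * this twice with the same arguments leaves the buffer unchanged,
 * which is the whole point of the XOR-cursor technique. */
void xor_cursor(uint8_t fb[8][8], uint8_t pat[CUR][CUR], int x, int y)
{
    for (int r = 0; r < CUR; r++)
        for (int c = 0; c < CUR; c++)
            fb[y + r][x + c] ^= pat[r][c];
}
```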
Computer algorithms must fall under the same rules as mathematical formulas and physical laws. In the past, when a person invented a new formula or way of doing math, they would get credit for its invention, but they wouldn't dream of patenting it.
Could you imagine what kind of a world we would have if Sir Isaac Newton had patented his invention of calculus? Every time you wanted to solve some math problem, you would have to send Sir Isaac some money, or buy a site license.
As Mr. Kapor mentions in his article, computer software is built upon mathematical foundations. Programs are akin to long, complex Boolean statements. How can one patent a formula or equation? (I guess the people at the Patent Office never saw the movie Young Einstein.)
I guess what determines what is patentable is what you are able to "sell" the Patent Office. If it looks new and unique to them, it must be so.
Timothy C. Swenson
Alexandria, Virginia
Dear DDJ,
When I saw the one-inch high letters on the cover of your January 1991 issue announcing the "Software Design" theme, I grabbed a copy, eager to find out what you had to say on this critical issue. I enjoyed Michael Hagerty's piece on the use of a CASE tool to rescue a southbound system design. (I even circled the appropriate cell on the reader service card.)
I am getting really frustrated with many aspects of the software development business, especially in the world of business applications: incompetent managers placed in charge by senior executives who know little (or less) about the software development life cycle (Senior Exec: "I'll put Paul in charge: he's an accountant, but he did something on the installation of our general ledger package, so he must know all about computers ..."); incompetent, learned-it-on-the-job programmers ("Structured what? I've been in this business for twenty years. Nobody can teach me anything about programming ..."); absurd project schedules ("Complete specifications before you start coding? No way, there isn't time. Start coding now or you won't make your deadline ..."); et cetera, ad nauseam.
What will it take before software engineering is considered a real profession, requiring completion of a standard university curriculum and subsequent licensure, before putting code in a buffer for money? Software architects, as Mitch Kapor advocates, certainly, and soon; but I could only nod in sympathy with Michael Hagerty (grimly, mind you) when he pointed out that his system developer "was apparently unfamiliar with the most basic principles of software engineering." Why are people like that employed in this discipline in this day and age? How would society feel about neurosurgeons "apparently unfamiliar with the most basic principles of" medicine?
The use of CASE tools to represent the design of a system should be an industry standard, among many others. They're mature, stable, and well worth the investment, unless of course, you're one of those pseudocoders who never understood what CASE tools were for. Yes, I wax sarcastic, but would a contractor consider the construction of a building designed by someone who couldn't produce working drawings in accordance with professional architectural standards?
Clearly, good design is just as important to software as it is to commercial aircraft or artificial hearts. But I believe it should be considered in the context of the larger issue: the "professionalization" of this discipline. DDJ is a respected magazine, a voice that is heeded by programmers. I hope to hear it much louder in favor of professional software engineering standards whenever and wherever discussions of this onerous problem occur.
Andy P. Bender
Riverdale, Maryland
Dear DDJ,
Thank you for publishing the protest of Jonathan Titus (Letters column, March 1991) about the Mark-8 being created and published in the July 1974 issue of Radio-Electronics, six months ahead of the famous Popular Electronics headline cover "World's First Minicomputer Kit to Rival Commercial Models ... 'Altair 8800' SAVE OVER $1000" (I have it in front of me as I write, preserved in a plastic bag.)
Popular Electronics was the most popular electronics magazine in the world. I recall my extreme frustration in even trying to find Radio-Electronics in libraries in Dallas when looking for referenced articles. The SMU Technical Library didn't have it. I was unable to find it in book stores.
The Altair 8800 actually led some place, being a direct line to the CP/M-based machines that dominated the market until the Apple II took its slice (while often running CP/M itself). It established a standard card connection bus (however badly arranged) that still has uses.
I think the Mark-8 is like the Langley flying machine that flew into the river after catching on its launcher. It could fly, but it had a bad test, while the Wright brothers not only flew, but proved they could control the plane, and then sold it.
I never saw the Mark-8, but Mr. Titus's letter makes clear that "about a thousand circuit board kits were sold," plus sets of hard-to-get parts. I have to assume that it had no case and looked like a bunch of parts, rather than the apparently usable computer on the cover of Popular Electronics. The Altair included all parts for $397, could be bought assembled for $498, and I saw it running both at our meetings and at the Altair Store that opened later in town. A lot more than a thousand were sold and it generated clones that sold even more widely (IMSAI). A lot of people believed in the Altair and the Computer Hobbyist Group-North Texas blossomed after the Popular Electronics article, not after that in Radio-Electronics.
Mike Firth
Dallas, Texas
Dear DDJ,
I have a problem: A certain floating point multiply instruction does not work correctly on my 80386/80387-based AT clone machine. This has been tested on 387s in machines of four different manufacturers and they have all failed. It has also been tested on several 287s and 8087s and they have all worked correctly.
The problem instruction was first found after compiling the program CNEWTON3.C from the book Fractal Programming in C, by Roger T. Stevens, using Borland's Turbo C 2.0. The program's screen output in certain regions was a solid brown color when it should have been varying shades of blue. By using the debugging aids of Turbo C, the problem was traced to a double-precision floating point multiply instruction compiled from these lines of C code:
Xsquare = X * X;
Ysquare = Y * Y;
denom = 3 * ((Xsquare - Ysquare) *
        (Xsquare - Ysquare) +
        4 * Xsquare * Ysquare);

The second multiply in 4 * Xsquare * Ysquare became a no-op in cases where Y was less than 2^-1022, but greater than zero. This happened whenever X,Y was trying to converge to X = 1.0, Y = 0.0. This would prevent the convergence; the number of iterations maxed out at 64 and the color brown was assigned.
After understanding the symptoms of the problem, I wrote a small program containing the lines
double X, Y1, Y2;
X = 4.45014e-308;
X = X / 2.0;
Y1 = 1.0 + X * 4.0;
Y2 = 1.0 + 4.0 * X;
The value of Y1 is computed correctly to be 1.0, but the value of Y2 is computed erroneously to be 5.0. An equivalent program was written in Turbo Pascal 5.0 and it did not fail in either case.
To understand the problem further, I used Turbo Debugger 1.0 to trace the code at assembly level. The number 4.45014e-308 becomes 001F FFFC 5D02 B3A1 in 64-bit floating point. The leading 001 is the sign and an 11-bit biased exponent. The mantissa is actually 1 FFFC 5D02 B3A1, but the leading 1 is not stored since it is known to be one. When this number is divided by 2.0 the result is 000F FFFE 2E81 59D0.
The biased exponent is now zero, but the mantissa also changed, because floating point numbers with a biased exponent of zero must have all bits of the mantissa stored explicitly. (This is because they are not necessarily normalized.) If this number were further divided by 2.0, the biased exponent would stay zero and the mantissa would shift off to the right and become unnormalized.
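The bit layout just described can be checked directly by pulling a double apart. This sketch (the struct and field names are my own) flags exactly the operand class identified as triggering the failure: a biased exponent of zero with a nonzero mantissa.

```c
#include <stdint.h>
#include <string.h>

/* Decompose an IEEE 64-bit double into sign, 11-bit biased exponent,
 * and 52-bit stored mantissa. */
typedef struct { int sign; unsigned exp; uint64_t mant; } dbl_fields;

dbl_fields split_double(double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);        /* type-pun via memcpy */
    dbl_fields f;
    f.sign = (int)(bits >> 63);
    f.exp  = (unsigned)((bits >> 52) & 0x7FF);
    f.mant = bits & 0x000FFFFFFFFFFFFFULL;
    return f;
}

/* Denormal (unnormalized) operand: zero biased exponent, nonzero mantissa. */
int is_denormal(double d)
{
    dbl_fields f = split_double(d);
    return f.exp == 0 && f.mant != 0;
}
```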
When the Turbo C code that fails was compared with the Turbo Pascal code that works correctly, it was found that Turbo C generated a 3-byte fmul instruction, while Turbo Pascal generated a 4-byte fmul instruction.
C:      cs:02A6  DC4EE8    fmul  qword ptr [bp-18]
Pascal: cs:0176  DC0E3E00  fmul  qword ptr [MAIN.X]

Apparently, the necessary and sufficient conditions for the failure are: 1. using an 80387; 2. the 3-byte form of fmul; 3. the double- or single-precision form of fmul; 4. an operand in RAM with a biased exponent of zero and a nonzero mantissa. The nature of the failure is that the fmul instruction becomes a no-op.
Harry J. Smith
Mountain View, California
Dear DDJ,
I was very interested in "Designing an OSI Test Bed," by Ken Crocker, which appeared in your December 1990 issue. I work with SDLC and the SCC, and wrote a BITBUS driver using the original Zilog SCC (both Z85C30 and Z80C30).
Mr. Crocker wrote that he had problems with CRC checking. I have had similar problems in the past, and I have found an unexpected solution which might work with the Intel and his code, as well.
In SDLC mode, there is no need to give the 'ENTER HUNT MODE' command at any time (not even in the Init routine)! The SCC will manage that for you. Let it do it; it knows what it's doing.
According to the Zilog manual, the RxCRC enable bit in register 3 of the SCC should be ignored in SDLC mode, but this seems not to be true in every case. My experience indicates that it is a good idea to leave this bit off (i.e., low). The SCC is quite a diva, and you have to think in curves and nodes to get it to work.
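A hedged sketch of the suggested register 3 setup, in C: the bit positions are as I read the Zilog SCC manual and should be verified against your own datasheet, and the actual port I/O is omitted since it varies by system.

```c
#include <stdint.h>

/* Write Register 3 bits for the Z85C30, per my reading of the Zilog
 * manual -- verify against your datasheet before relying on them. */
#define WR3_RX_ENABLE     0x01
#define WR3_ADDR_SEARCH   0x04   /* SDLC address search mode */
#define WR3_RX_CRC_ENABLE 0x08   /* leave CLEAR, per the advice above */
#define WR3_ENTER_HUNT    0x10   /* never set: the SCC manages hunt itself */
#define WR3_RX_8BITS      0xC0   /* 8 bits/character */

/* Compose the WR3 value for SDLC reception: receiver on, address
 * search, 8 bits/char; no Rx CRC enable, no Enter Hunt command. */
uint8_t sdlc_wr3_value(void)
{
    return WR3_RX_8BITS | WR3_ADDR_SEARCH | WR3_RX_ENABLE;
}
```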
By the way, Zilog has announced a new version of the SCC called Z85C130 ESCC. This version includes a lot of improvements (deeper receive and transmit FIFO) which should make it easier to use.
A general question to the experts: Why does everyone use the 85C30 with processors like the 80x86? The Z80C30, with its multiplexed Address/Data bus is really the better solution.
Volker Goller
Aachen, Germany
In Listing One of the April 1991 "C Programming" column, the source code at the bottom of the second column on page 150 should read:
/* -- attach vectors to resident program -- */
setvect (KYBRD, newkb);
setvect (INT28, new28);
Copyright © 1991, Dr. Dobb's Journal