Dear DDJ,
This is in reply to C Snippet #38, which appeared in the November 1992 issue. The snippet showed a method to remove trailing blanks from a character string. It used a backward scan to find the trailing whitespace and then calculated an index for each character, until it reached a non-whitespace character. There are simpler ways to accomplish this.
The routine in Example 1 performs the same task as Snippet #38, but does it in a single forward pass. Like Snippet #38, it also skips over embedded blanks and handles all the special cases I can think of.
/**********************************************************
* rmtrail - Removes trailing whitespace from a
*           null-terminated string.
* R. Richert 11-92
**********************************************************/
#include <ctype.h>

char *rmtrail(char *str)
{
    char *strptr, *gotBlank;

    for (gotBlank = 0, strptr = str; *strptr; strptr++) {
        if (isspace(*strptr)) {
            if (!gotBlank) gotBlank = strptr;
        }
        else
            gotBlank = 0;
    }
    if (gotBlank) *gotBlank = '\0';
    return str;
}
One caution is in order: the braces around the inner if statement are necessary. Without them, the else would attach to the inner if (the nearest one) rather than the outer one, silently changing the logic.
I do not claim that this is the fastest and best way to remove trailing blanks, but it is simple, and it gets the job done. Isn't that what snippets are all about?
R. Richert
Sunnyvale, California
Dear DDJ,
Regarding Peter Smith's article, "LUC Public-key Encryption" (January, 1993), I'd like to point out that the public key d (and so the private key e) is strictly dependent on the message P to be sent; remember how r is calculated with respect to D = P^2 - 4!
Furthermore, LUC isn't new. In 1981, W.B. Müller and W. Nöbauer introduced the scheme in "Some Remarks on Public-key Cryptosystems" (Studia Sci. Math. Hungar. 16), calling it the "Dickson Scheme." They also worked out r = lcm(p1^(e1-1)(p1^2 - 1), ..., pt^(et-1)(pt^2 - 1)), which is obviously not message dependent. Since then the system has been reinvented several times, largely because the relationship between Dickson polynomials of the first kind and the Lucas sequence Vn(P,Q) was not widely known.
There also exists a "Cryptanalysis of the Dickson-Scheme," by W.B. Müller and R. Nöbauer (1986) in Advances in Cryptology: EUROCRYPT '85 (Springer-Verlag). The article by R. Lidl and W.B. Müller entitled "Permutation Polynomials in RSA-Cryptosystems" (Proceedings of Crypto '83, University of California-Santa Barbara; Plenum Press, 1984) provides a more general approach to new public-key cryptosystems by replacing the exponentiation in the RSA cryptosystem with other permutation polynomials and permutation functions.
Willi More
Klagenfurt, Austria
Dear DDJ,
Peter Smith's nicely written article, "LUC Public-key Encryption," with its comparisons to the RSA public key cryptosystem, caught our attention. Like recent work on elliptic curves, LUC combines old mathematics with new cryptography and appears to offer some interesting properties.
Its advantages over RSA, however, remain unproven. Indeed, it is not clear that, as the author claims, "LUC will be at least as efficient [as RSA]." Many of the exponentiation heuristics that speed up RSA computation seem ineffective for LUC, while heuristics for LUC remain effective for RSA. Therefore, it is more accurate to say that RSA will be at least as efficient as LUC. Nevertheless, the speeds are likely to be comparable for a given key size.
Although LUC does appear to avoid adaptive chosen-message forgery, it is susceptible to another type of forgery. Suppose P is a message and k is a signer's private exponent. Then Vk(P,1) is the signature of P. Given that signature, an attacker can forge the signature of the message Vn(P,1) for any n. Since the attacker knows Vk(P,1) and n, he or she can compute Vn(Vk(P,1),1). According to Example 4(e) in the article, Vn(Vk(P,1),1) = Vnk(P,1) = Vk(Vn(P,1),1), which is the signature of Vn(P,1).
Overcoming this "existential" type of forgery appears to require a hash function or message formatting, contradicting one of the claimed advantages over RSA. Since a hash function allows one to sign messages of arbitrary length with a single RSA (or LUC) encryption, the property of not requiring a hash function is not necessarily an advantage.
Taher Elgamal,
Director of Engineering
RSA Data Security
Burton S. Kaliski, Chief Scientist
RSA Laboratories
Peter responds: Willi More's point -- that r is dependent on the message -- is covered in the listings that accompanied the article. While at first sight this dependency appears to invalidate the LUC scheme, it can readily be seen that there are only four possible d values (for a fixed e) that can be precomputed. The correct value for a particular message is chosen by the calculations outlined in the listings. This actually adds to the security of LUC, and does not seriously affect the decryption time.
Willi also pointed out the earlier work of Müller and Nöbauer, of which I was unaware until after the article was printed. They describe a remarkably similar system, but do not provide any practical way of actually carrying out encryption of a message. (They use high powers of irrational numbers!) Using their r instead of my (message dependent) r would lead to a doubling of the decryption time.
In summary, LUC is both practicable and new. The article contains a reasonably full description of the mathematics behind it.
Taher Elgamal and Burton Kaliski's courteous letter makes three points about LUC. First, the heuristics necessary to give LUC parity with exponentiation are admittedly more complicated, but certainly not ineffective: LUC has yet to meet an exponentiation heuristic that can't be adapted to Lucas-function calculation. Exponentiation heuristics are a well-studied field. It may well be that more effective heuristics await Lucas functions, whose study for these purposes is quite new.
The second point, that LUC appears "to avoid adaptive chosen-message forgery," is the most important advance of LUC over RSA, and I appreciate the professional integrity of RSA Laboratories in recognizing this fact.
The presence of "existential" forgery in LUC may require message formatting. This type of forgery produces random, nonsensical results, and is regarded as a minor nuisance rather than a meaningful forgery. LUC's message signing will use hashing, but only as an elective message-digesting process. RSA, on the other hand, requires hashing for this purpose, as well as to guard against adaptive chosen-message forgeries.
RSA has done a stalwart job leading the fight to establish public-key cryptography over the last fifteen years. It appears that the mantle now passes to LUC, RSA's match for speed and its superior in terms of cryptographic security.
Dear DDJ,
Regarding Scott Guthery's "A Curmudgery on Programming-language Trends" (December 1992), why was the article given such a title? Are programming-language trends so holy that to criticize them makes one a curmudgeon?
A computer language isn't much more than a means to an expression. Every elegant design, whether hardware or software, begins with the phrase, "let A be Z." That is, the problem is so well understood that a starting place is declared, and everything else follows. The final code can be written in any language understood by a computer and an elegant design will stand.
Obviously, writing in C is faster than writing in assembly language. With practice, either assembler or C code can be written to be reused with a minimal effort. Scott is absolutely correct when he suggests getting on top of the learning curve of whatever tools you use. After mastering those tools, take the time to write some tools of your own.
Do we need C++? I've been using object techniques since writing my first concurrent scheduler in 1978. The fact that C++ translates into C first tells you that everything you need already exists in C. Will using C++ make you a better programmer or a better program designer? I doubt it. If it takes C++ to give a new perspective on programming, shouldn't we ask why such a perspective has to be bundled with a specific implementation language? Isn't such a perspective applicable to programs in general? Does using a schematic editor make one a better engineer? Underlying principles must be understood before elegant designs are possible. Once first principles are understood, C is more than powerful enough to program in. In fact, it is superior to C++ because there are fewer hidden details to trap the blissful. Data hiding may be appropriate in some circumstances, but when your compiler or operating system hides details that prevent specific debugging, then it's a hobble.
Another constant has been the promise of the latest language or compiler. Software developers know that they have to continue promising, and magazines love to help them purvey their promises. This set of conditions guarantees a popular future for C++ and all the languages and dialects sure to follow. That popularity will be used as evidence of the value of the latest programming fad.
Programming productivity won't be boosted much by any programming language. Take a look at a modest program. First, there is a set of decisions that are forced by the application. That is, how do you squeeze the application problem into a computable context? What are the general rules and what are the exceptions? Let's call all the decisions forced by the application "X." Next, there are all the internal coordination decisions that are necessary to bind the application decisions into a coherent program. Let those decisions be "Y." X+Y decisions must be coded before a program works. If a poor design translates the application into a large X, and that is compounded by structureless convolutions of Y, then the result is obvious. Productivity requires that we reduce the sum of X and Y, not how we finally code the problem. Focusing on a language or a dialect is part of the problem, not part of a solution.
David Smead
Seattle, Washington
Dear DDJ,
This is in response to the October 1992 letters from Donald Kenney and Shohei Nakazawa concerning my "Japatent challenge" ("Letters," July 1992).
Reader Kenney correctly delineates the broad parameters involved in translating a document from one language to another, but those parameters might be tightened somewhat, as follows.
First, the text of a patent involves primarily the communication of a technology. This may relieve the translation burden somewhat over that involved in translating, say, a novel. Technical words are largely the same in both English and Japanese. Kenney's idea of using two university students (instead of software!) to translate and proofread is a good one.
Second, a reference to the original American patent would appear in the translated patent; thus, inaccuracies such as the example Kenney gives (translating "track jam" to "jelly on the track"), would be embarrassing but not serious.
Third, one might translate using a software dictionary of phrases instead of just words and even groups of phrases, "ideas," while focusing on the particular technical area of immediate interest.
As to whether the Japan Patent Office might someday accept patent filings in English, this may not be as improbable as one thinks. After all, my letter of inquiry to the Japan Patent Office (in English) was promptly replied to with one which was quite extensive and detailed, and in perfect English! Many scientific papers written in Japan are written in English. And I recently attended a technical conference at the University of Tokyo which was entirely in English, except for one Russian lady who spoke in Russian with a Russian-to-English translator from the local Russian embassy!
Indeed, I hereby predict that within approximately a year after a practical, affordable translation program is developed, Japan will begin accepting patent applications in English. (That prediction is based on one of Murphy's laws.) Already the Japanese have taken a giant step forward by allowing patent applications to be on computer disk; this relieves us of the burden of printing out all those thousands of characters.
On that note, I would like to acknowledge the assistance of Pacific Software Publishing, and thank them for sending me a copy of the AX Technical Reference Guide: Kanji at Environment. This 318-page handbook contains, among other things, the double-byte codes for many, many Kanji characters. It is printed in Japan (in English) and is a must for anyone pursuing this subject.
Homer B. Tilton
Tucson, Arizona
Copyright © 1993, Dr. Dobb's Journal