Prolog & Quantum Computing

Dr. Dobb's Journal May 2004

By Michael Swaine

Michael is editor-at-large for DDJ. He can be contacted at mike@swaine.com.

On any long journey, it's good to take readings from time to time to ensure that you haven't strayed from the true course. Whenever I think it's time to get my bearings in this "Programming Paradigms" column, I always go back to Thomas Kuhn, and I am invariably reassured. It was from Kuhn that I took the "paradigms" term that gives the column its rather fuzzy premise, and fuzziness is so forgiving.

The word "paradigm" became a pernicious and apparently ineradicable meme in Western culture with the publication of Kuhn's The Structure of Scientific Revolutions in 1962 (I have the Second Edition, from University of Chicago Press, 1970; ISBN 0226458040). Kuhn, writing about how scientists actually do science as opposed to how they think they do science, used the word in a particularly slippery fashion, or so his critics charged, letting it mean different things in different contexts. I agree that Kuhn's use of the word was slippery, although I suspect that this is exactly what made the paradigm meme spread so rapidly and widely. (To those who like their metaphors to be bound by the laws of physics, I apologize for making "paradigms" simultaneously slippery and fuzzy.)

Kuhn admitted to using the word in at least two different senses: as the collection of beliefs, values, techniques, and such shared by the members of a particular community; and as "the concrete puzzle-solutions which, employed as models or examples, can replace explicit rules as the basis for the solution of the remaining puzzles" of the particular community.

In this column, I've consciously allowed my use of the word "paradigm" to be at least as slippery as Kuhn's, to keep the constraints on my writing from getting too—uh, constraining. It's worth noting, though, that the sense of a paradigm as a puzzle solution used as an exemplar for solving other puzzles does have a direct parallel in programming: It sounds an awful lot like design patterns. It's also interesting, I think, that, if you take Kuhn seriously about how scientists tend to be conservative and work within the single paradigm they know, programmers look considerably more flexible and pragmatic than scientists.

Just to prove how flexible and pragmatic I am, the paradigms, if any, in this month's column will be based on some other sense of the word than those previously spelled out. Specifically, I'll be talking about Googlewhacking quantum computing and critiquing the logic of a claim that programming in Prolog isn't logic programming.

Googlewhacking Quantum Computing

Googlewhacking, as the world knows by now, is the sport of seeking pairs of words that produce exactly one hit when fed together to the Google search engine. Like "trickledown omelette" or "snowballing lipopolysaccharide" or "despotism fluidics." (Details can be found at http://www.googlewhack.com/.)

Aside from producing possible band names, Googlewhacking seems to have no redeeming social value, making it an ideal activity for bored biomedical students. So Mathew Smith and Christopher Morris of Cardiff University's School of Pharmacy have come up with a variant of Googlewhacking called "Pubmed Whack." In this game, you enter your two search words on the Pubmed search page (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi) in the hope of getting back exactly one cited reference. The "Feedback" column at New Scientist magazine reports that "dendrimer endocytosis" is a Pubmed whack, as is "mitochondria daffodil." This suggests (challenging my claim that Googlewhacking has no redeeming social value) that Googlewhacking or a variation on it could actually be of some use as a research tool, finding research topics that are still open. The idea of using it to do first-approximation patent searches also comes to mind.

Last month, I promised (threatened?) to write this month about quantum yaddayadda, where yaddayadda stands for such trendy topics as cellular automata, neural networks, artificial intelligence, and so forth. Not only had I in my daily websurfing actually come across articles on all these subjects, but "quantum" is clearly one of those terms that can be affixed to almost any trendy or stodgy subject to make it sound sexier.

So Googlewhacking the word "quantum," or better yet, the phrase "quantum computing," with other promising phrases like "cellular automata" and "feng shui," seemed an obvious thing to do.

Then again, maybe not. As it turns out, I was breaking the Googlewhacking rules by using phrases rather than single words. For what it's worth (nothing, by the official Googlewhacking standards), here are some of my successes—and other outcomes:

I thought I might use Googlewhacking and quantum computing to help the President find some of the things he's mislaid. My first attempt, combining "quantum computing" with "weapons of mass destruction," was a bomb—I got 415 hits. There's obviously a lot of scary research going on out there that I don't want to know about, but the Googlewhacking goal is to find the one match. Then I hit the jackpot by combining "quantum computing" with "the search for Osama bin Laden," for precisely one hit. Better get those quantum computers cranking, Mr. President!

Other one-hit wonders for "quantum computing" were "fair and balanced Fox News," "Mel Gibson's The Passion," and "Forth code." It's a lot easier when you use phrases. "Janet Jackson's breast" combined with "quantum computing" brought a disappointing pair of hits. And you can make what you will of the fact that "quantum Zeno effect" combined with "John Kerry" netted a perfect single hit.

"Forth code," though. That one bothers me. Is there no interest in quantum computing in the Forth community? No interest in Forth in the quantum computing community? Even Snobol did better. Are there really people doing quantum computing research in Snobol? The mind boggles.

Well, so much for my research methodology. I should get on with showing you some of the results of the research.

Topological Quantum Computing

Checking the exit polls, factoring out the undecideds, and adjusting for last-minute concession speeches, the results are: Quantum computing and cellular automata: 3280 hits. Quantum computing and neural networks: 6410 hits. Quantum computing and artificial intelligence: 8630 hits. Topological quantum computing: 65 hits.

Let's quit there. Topological quantum computing looks narrow enough and broad enough to be worth a look. And after some culling and printing, I have in hand a paper out of Microsoft's labs, authored by Michael H. Freedman, Alexei Kitaev, Michael J. Larsen, and Zhenghan Wang, dated September 20, 2002, and titled simply "Topological Quantum Computation."

Topological quantum computing is a nonstandard approach to quantum computing that depends on topological degrees of freedom rather than the quantum states of individual physical qubits used in the standard models. The standard roadmap for building a quantum computer, as Freedman et al. describe it, runs through a series of milestones that begins with constructing reliable physical qubits.

The article suggests that you don't really need the physical qubits. You do need decoherence-protected quantum degrees of freedom at the macroscopic level, but this doesn't actually require you to create physical qubits. You could instead encode a logical qubit in some other physical system that doesn't consist of physical qubits, so long as it has the necessary characteristics.

Lots of candidate systems lack one or more of the necessary characteristics, but the authors point out that there is a topological parameter of collective electronic systems, such as the fractional quantum Hall effect, that works.

And that's where it gets impossibly fuzzy for me. There's some very impressive talk in the article about annihilating particle-antiparticle pairs and qutrits (three-state cousins of the two-state qubit) and nonabelian anyons and topological defects and Abrikosov vortices and braid groups, and it all goes right over my head. But I think that the general idea is that you can encode quantum states in topological properties of matter in a way that dodges the decoherence problems of other quantum computing scenarios.

I know I shouldn't wade in this far over my head, but it's hard to resist an article that says things like, "[w]e initialize our system...by pulling anyonic pairs out of the vacuum." All I've ever been able to pull out of the vacuum is dog hair, so I'm impressed, even though they later refer to the "round circles" in a diagram. But who am I to challenge them: For all I know, they've figured out how to make square ones.

As I think I have made clear, this article is over my head; but I do get the conclusion: Although the implementation challenges of turning these topological quantum computing ideas into a working universal computer are daunting, "they are, perhaps, less difficult than a head-on assault on the accuracy threshold in the quantum circuit model" that represents mainstream quantum computing research.

The Quantum Zeno Effect

I mentioned the quantum Zeno effect above without defining it. The QZE could be helpful in the development of practical quantum computers, so let me take a crack at it.

The Greek philosopher Zeno argued in one of his famous paradoxes that motion does not exist because at any given instant, if you observe an arrow in flight, you observe a stationary arrow. Turning an infinite number of zero-motion steps into a nonzero motion doesn't work, because no number of zeroes adds up to anything but zero.

The quantum Zeno effect is not a logical paradox but an experimental observation: If you observe a quantum system frequently enough, such as a system of sodium atoms trapped in a pair of laser beams, you reduce the probability of the atoms escaping in a process called "decay." In other words, you can make the system more stable against quantum decay merely by observing it a lot. The watched quantum pot boils slower, as one wag put it.

On the other hand, there is an anti-Zeno effect that speeds up decay with repeated observations. It seems crucial, then, to determine which effect applies when.

Logic and Illogic on Prolog

Q: How many Prolog programmers does it take to screw in a light bulb?

A: no.

(An aside: The Internet has democratized research. I am reluctant to use the expression "too much" in the same sentence with "democratization," but there are some changes about which I find myself taking a reactionary stance. Anonymous postings, for example. I don't apologize for the paragraph that follows this one, but I do think that it leaves something to be desired when compared with, say, the more conventional reference to Kuhn's book above.)

If you don't know about Kuro5hin.org, it's "a collaborative site about technology and culture" built in 1999 and maintained by a guy named Rusty, who currently lives in Peaks Island, Maine and writes web applications in Perl. Kuro5hin is supposed to sound like "corrosion," a synonym for "rust"—and his name is Rusty. Fun-nee. And blame Neal Stephenson's novel Snow Crash for the intrusion of the "5."

Regarding the obvious similarity of his site to Slashdot, Rusty says, "We borrowed many interface ideas from Slashdot, because we think they had many good ideas about the mechanics of a web discussion site." But it uses no Slashdot code; the site runs on software called "Scoop," written by Rusty and friends. He considers Kuro5hin to be "much more discussion-centric than news-centric [like Slashdot]."

An interesting paradigmatic discussion was recently triggered on Kuro5hin by a piece attributed to one tkatchev and titled "A Prolog Introduction for Hackers." tkatchev starts off with a bang by claiming that Prolog is "without a doubt...the simplest and the most straightforward...of all mainstream programming languages." So you see that he's not afraid of controversy. (The light bulb joke above is a dissenting comment, not my own, on the straightforwardness of Prolog's user interaction.)

What tkatchev delivers is, it seems to me, a clear discussion marred by unilluminating examples. As one critic points out, variations on "Hello World" are not necessarily the best way to demonstrate the unique virtues of Prolog.

He does make the insightful point that in Prolog, both I/O and arithmetic are hacks, features only grudgingly grafted onto the language. Most languages harbor a few such hacks, and tkatchev's insight is suggestive: One way of appreciating the pure paradigm underlying any language might be to consider which essential language features are implemented via impure hacks.
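To see what he means about arithmetic, here's a sketch of how it behaves in a typical Prolog system (exact error messages vary by implementation):

```prolog
% Arithmetic via is/2 is directional, not relational: the right-hand
% side is evaluated as an expression, so it must be fully instantiated.

?- X is 2 + 3.
% X = 5.

?- 5 is X + 3.
% Throws an instantiation error: X is unbound, so the right-hand
% side cannot be evaluated.

?- 5 =:= 2 + 3.
% Succeeds: =:=/2 evaluates both sides and compares the values.
```

A purely logical language would treat `5 is X + 3` as a constraint and solve for X; standard Prolog doesn't, which is exactly the grafted-on quality tkatchev is complaining about.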

What would be the logic behind creating a programming language in which math is a hack? Ask tkatchev, and he (okay, could be she) would probably say that it's because Prolog is really just a query language for databases. Prolog, tkatchev says, is sort of like SQL, but where SQL supports simple searches on rich relational databases, Prolog supports rich pattern-matching searches on simple databases. What Prolog is not, according to tkatchev, is a logic-based language. If you persist in that delusion, he warns, you will get bitten.
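Here's a sketch of what that SQL analogy looks like in practice, using a hypothetical parent/2 predicate as the "database":

```prolog
% A simple database of facts (hypothetical example data).
parent(tom, bob).
parent(bob, ann).
parent(bob, pat).

% A rule is a reusable pattern-matching query over those facts,
% something SQL would express with a self-join.
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

% ?- grandparent(tom, Who).
% Who = ann ;
% Who = pat.
```

The facts are as flat as any table, but the query side, with unification and backtracking, is considerably richer than a SELECT statement.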

That claim is so at odds with what people do with Prolog, and with what its developer had in mind for it, that it can't be left unchallenged.

Prolog was invented in 1971 by Alain Colmerauer, first implemented by him in 1972, and given the name Prolog in 1972 by Philippe Roussel as an abbreviation for "PROgrammation en LOGique," which Google tells me translates to "programming in logic."

Along with Jean Trudel and Philippe Roussel, Colmerauer started studying automated theorem-proving in 1970, looking particularly at the resolution principle of Alan Robinson. "We came across a very interesting refinement of this method: SL-resolution, and, in June 1971, we invited [its] author, Robert Kowalski...to visit our research team in Marseilles." Kowalski is an important name in logic programming and the author of Logic for Problem Solving (Elsevier/North Holland, 1979; ISBN 0444003657). Logic. Not databases.

In fact, Colmerauer has said that his goal in developing Prolog wasn't to create a programming language at all, but to develop tools for processing natural language. More specifically, he was interested in making deductions from chunks of natural language text. He wanted "to describe to the computer in natural language (French) a small world of concepts and then ask the computer questions about that world and obtain answers." Not to do database queries.

Colmerauer is still a respected researcher in logic and constraint programming, and he and his collaborators still refer to Prolog as a logic programming language. Not a database query language.

Lest you think that all the relevant books are ancient tomes, here's a new book that crossed my transom as I was finishing this column: Games, Logic, and Constructive Sets, edited by Grigori Mints and Reinhard Muskens (CSLI Publications, 2003; ISBN 1575864495). If you try Googlewhacking "Mints" and "Prolog" you'll find out that this professor of philosophy and researcher in automated logic is a confirmed Prolog programmer.

All of which ought to make it perfectly clear what Prolog is really for, but there is a twist that I'll get to in a moment. First, I want to mention the infamous cut operator. tkatchev calls it "a dirty, very confusing and dangerous hack" and says that it "will eventually make your programs impossible to read and introduce subtle bugs." Avoid it, he says, like the plague.

I'm not so sure that he's wrong in his characterization of this hack. The cut operator terminates backtracking at the place in the code where it is inserted. I think it makes the code hard to follow and debug and breaks the purity of the paradigm, but it can also make the code vastly more efficient. And unless you're just using Prolog to do database queries, I don't think you can do without it. Colmerauer introduced the cut operator and he says it's very useful. In fact, I think that it's essential for implementing logical negation.
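Here's a minimal sketch of both roles of the cut; max_of/3 and not_provable/1 are hypothetical names chosen to avoid clashing with built-ins:

```prolog
% Cut for efficiency: once X >= Y succeeds, commit to the first
% clause so Prolog never backtracks into the second.
max_of(X, Y, X) :- X >= Y, !.
max_of(_, Y, Y).

% Cut for negation as failure: if Goal succeeds, cut away the
% alternative clause and fail; if Goal fails, the second clause
% succeeds, so not_provable(Goal) holds.
not_provable(Goal) :- call(Goal), !, fail.
not_provable(_).
```

Note what the cut costs you in the first example: without it, the clean two-clause version (one clause for X >= Y, one for X < Y) reads as a pair of logical statements; with it, the second clause is only correct because of the control flow of the first.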

Fortunately, it is often possible to eliminate cuts from your code. So you can (1) write logically clean cut-free code, (2) introduce cuts to make it more efficient, and then (3) remove the cuts while keeping most of the gain in efficiency. The cut-elimination technique is described by the aforementioned Grigori Mints in the aforementioned book, in an article titled "Quick Cut-Elimination for Monotone Cuts." If I'm reading Mints correctly, what he means by "quick" is efficient: He eliminates cuts without reintroducing the inefficiency that the cut was introduced to eliminate. The technique can be applied to any formula that doesn't contain logical negation or implication—supporting my belief that you can't do logical negation without cuts.

But I raised a flag a couple of paragraphs back, and it's time to revisit the question of what Prolog is really for. One of the discussants at Kuro5hin brought up RDF, the Resource Description Framework, which is at the heart of the W3C Semantic Web project:

"RDF is basically Prolog assertions. The idea is that we're going to put these assertions all over the Web, and network them together, and use something like Prolog to get our computers to reason about stuff."
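The mapping is easy to picture. An RDF statement is a subject-predicate-object triple, which lands naturally in a Prolog fact; this is a sketch, with the URIs abbreviated to atoms for readability:

```prolog
% The RDF triple
%   <http://example.org/ddj> <http://purl.org/dc/elements/1.1/creator> "Michael Swaine" .
% becomes a Prolog fact:
triple(ddj, creator, 'Michael Swaine').
triple(ddj, subject, programming).

% Reasoning over a network of such triples is then ordinary
% Prolog querying and backtracking:
% ?- triple(What, creator, 'Michael Swaine').
% What = ddj.
```
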

As I see it, RDF is the main reason for anyone to study Prolog today.

DDJ