Phil is president of NeoCortek Inc., a developer of assistive technology. He can be contacted at phil_mitchell@neocortek.com.
While Alan Turing is known to mathematicians for his contributions to the theory of recursive functions, and to AI cognoscenti for the Turing test, he was nonetheless the ultimate hacker. Puzzler, tinkerer, happy amateur, and clever as all hell, Turing designed and built one of the world's first stored-program, digital-electronic computers, then stayed up all night programming it. (Well, okay, he only helped design and build it.) In the process, Turing grasped the challenges and potential of the modern computer with unique vision. At a time when many still thought only of calculating missile trajectories, Turing was more interested in artificial intelligence and machine learning. He had no working machine to play with at the time; instead, he wrote his own floating-point routines, toyed with high-level languages, and pondered the idea of self-modifying programs.
Alan Turing: The Enigma, Andrew Hodges' biography of Turing, which appeared over a decade ago but remains definitive, is straightforward, detailed, and sympathetic. It focuses on Turing's work but also provides useful social and personal context. Turing's intellect and achievements brought him into the company of giants, among them John von Neumann, Claude Shannon, and Ludwig Wittgenstein, each of whom makes an appearance here. In this wide-ranging book, Hodges' only obvious blunder is in a brief and misconceived attempt to link Turing with Wittgensteinian philosophy.
From his early life as a lackluster student languishing in English public schools, Turing became a promising math candidate at Cambridge. His first major triumph was his 1936 solution to renowned mathematician David Hilbert's Decision Problem, in which he showed, essentially, that there can be no general algorithmic method for deciding the provability of arbitrary mathematical statements. Along with Gödel's earlier results, this brought to an end Hilbert's formalist program: the attempt to systematically complete the project of mathematics by formalizing (and even mechanizing) the processes of mathematical proof. But Turing's result was not at all only negative; essential to his proof was his idea of a "universal machine," now known as the universal Turing machine, which provided a conceptual foundation for his work on an actual computer.
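The machine Turing imagined is simple enough to sketch in a few lines of modern code: a finite control table plus an unbounded tape. The table format and the example machine below are invented for illustration; they are not taken from Turing's paper.

```python
# A minimal Turing-machine simulator: a finite state-transition table
# driving a read/write head over an unbounded tape.
def run(table, tape, state="q0", head=0, blank="_", max_steps=1000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells))

# Example machine, unary successor: scan right over the 1s, then
# write one more 1 at the first blank and halt.
succ = {
    ("q0", "1"): ("1", "R", "q0"),    # keep moving right over 1s
    ("q0", "_"): ("1", "R", "halt"),  # first blank: append a 1, stop
}

print(run(succ, "111"))  # three in unary becomes four: "1111"
```

The universal machine is then just such a simulator whose transition table interprets *another* machine's table stored on the tape, which is the conceptual germ of the stored-program computer.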
Then World War II broke out and Turing was assigned to the cryptanalysis unit at Bletchley Park. There, with a few other mathematicians, engineers, and one classics don, he toiled to crack German radio transmission codes. Most of these codes were based on the widely known Enigma machine and its periodic polyalphabetic cipher. Variations across the Wehrmacht branches and frequent modifications and upgrades to Enigma during the war ensured that the pressure at Bletchley Park never eased. As Hodges delineates in fine detail, only an inspired combination of combinatorial insight, mechanical ingenuity, luck, and perseverance enabled these crypto-amateurs to decipher vast amounts of German message traffic, making an inestimable contribution to the Allied victory.
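What "periodic polyalphabetic cipher" means can be made concrete with a drastically simplified, single-rotor sketch. The rotor wiring below is the historically documented wiring of Enigma rotor I, but everything else is stripped away (a real Enigma stacked three or four rotors behind a plugboard and reflector); the point is only that the substitution alphabet changes with every keypress, defeating single-letter frequency analysis.

```python
import string

ALPHA = string.ascii_uppercase
# Wiring of historical Enigma rotor I: a fixed permutation of A..Z.
WIRING = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"

def encipher(text, start=0):
    """One-rotor toy cipher: the rotor steps after every letter, so each
    position of the text sees a different substitution alphabet."""
    out = []
    for i, ch in enumerate(text):
        offset = (start + i) % 26                   # rotor position this keypress
        c = (ALPHA.index(ch) + offset) % 26         # enter rotor at current offset
        c = (ALPHA.index(WIRING[c]) - offset) % 26  # through wiring, back out
        out.append(ALPHA[c])
    return "".join(out)

# The same plaintext letter enciphers differently at each position:
print(encipher("AAAA"))
```

This toy lacks the reflector that made the real Enigma self-reciprocal, a property Bletchley Park exploited heavily in its attacks.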
Turing learned an extraordinary amount at Bletchley Park; the reverse engineering of Enigma, the flexible applications of symbolic representations, and the vast scale of computational organization needed to succeed at cryptanalysis catapulted him into work on the physical embodiment of his universal machine after the war. If only the people around him could have kept up! He began designing his computer at the National Physical Laboratory; but after several years of institutional hindrances, he abandoned the project and took a position at Manchester University. There, though he wouldn't get to design from scratch, he would at least be able to play with a working computer. And play he did, staying up nights to coax the delicately cobbled machine through his code, monitoring the cathode-ray tubes that provided both storage and display. Thirty-two lines of thirty-two dots constituted the 32-bit binary user interface! Only a year before, he had written: "This process of constructing instruction tables [i.e., programming] should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself."
It's sad to realize that this consummate hacker never got to hack beyond this earliest of crude computers; what he might have tried with a workstation! Gradually alienated from the Manchester environment, Turing turned his attention elsewhere within a few years; he became especially interested in fundamental questions in embryology and the development of organic form, attempting to hack the physical chemistry of biological organization. He also was arrested, tried, and convicted of engaging in homosexual acts, for which he was sentenced to a year of experimental estrogen therapy. Two years later, at age 42, he was dead. Though apparently a suicide, a more paranoid explanation might center on those in the British intelligence services who surely preferred this "pervert" with the highest security clearance dead. He was buried without an autopsy.
Hodges' biography is a stimulating account of a remarkable man, one who worried that his educable machine could not be sent off to school "...without the other children making excessive fun of it." Had he lived, he could have judged for himself the prophecy that he made in 1950: "I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game [Turing test] so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. [...And] the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."
Alan Turing: The Enigma is available throughout the world, except in the U.S., where it is unfortunately out of print. It is available, however, from the Internet Bookshop. Check Andrew Hodges' home page at http://www.wadham.ox.ac.uk/~ahodges/ for details.
If Alan Turing was the first programmer, Bruce Blum wants to be the last. Not that he wants to do away with programming, just put it in its place as a specialized tool in the software design process. According to Blum, software design methods that focus on programming are missing the big picture.
In Beyond Programming: To a New Era of Design, Blum brings us this picture from a rare perspective. In 1994, he retired from the Applied Physics Lab at Johns Hopkins University, after 32 years in computing. During that time, he served variously as programmer, designer, and project manager, mostly on large systems such as NASA's Landsat Data Handling System and Navy command-and-control systems. In addition to over a hundred published articles, Blum has written four books, including a textbook on "traditional" software engineering, as well as another book on the alternative software engineering environment (SEE) that he describes in Beyond Programming.
Blum's argument begins with the fact that large software systems are currently designed in a two-step process: Specify requirements, then implement the spec. Blum compares this to constructing a building: Prepare a blueprint, then implement the blueprint. And the reason it's terribly important to finalize a blueprint is the cost of making changes to a building plan once construction starts; in that context, a two-step process makes sense. But, he argues, as software systems have become larger and more complex, it has become increasingly difficult to create stable and valid requirement specifications. (One IBM study estimated an average of 25 percent shift in requirements after implementation starts.) Rapid prototyping, domain modeling, and other approaches may help minimize this problem, but they don't go to the heart of it, which is that for many projects this two-step model is simply inappropriate. Whereas adding an elevator shaft after the plasterboard is installed in a building is bound to be expensive, Blum insists that the wonderful thing about software is that it need not have the same inflexibility. Moreover, it mustn't have it, because increasingly we are designing systems for which the end users simply cannot adequately specify their needs in advance. Such systems can't be built, they've got to evolve. For such systems, Blum argues, an "adaptive design" environment is essential. Instead of architecture, sculpture; instead of a "build-to" spec, an "as-built" spec.
What makes this more than just another pretty idea is the fact that Blum can point to a working version of this adaptive environment, one that has been used on major projects. He doesn't claim it's the only such environment, nor necessarily the best, but it constitutes a kind of existence proof. Named "TEDIUM," it grew out of the need to reengineer the Oncology Clinical Information System (OCIS) at Johns Hopkins Oncology Center in the early '80s. Rather than struggle to maintain and expand an already unwieldy MUMPS application, Blum decided to build an environment to support the software process itself. Although TEDIUM was created with interactive information systems as its target application type, Blum believes the principles involved are applicable to any type of application; TEDIUM itself was developed in TEDIUM.
TEDIUM is built around a core representation scheme that is domain-specific, enabling the TEDIUM programmer (make that "designer") to directly model solutions from the end-user point of view. Code (MUMPS code, in the case of OCIS) is then generated automatically by the environment from this model. As Blum would say, the designers "sculpt" the response of the system, without worrying about implementation details. In this sense, TEDIUM generalizes what visual programming environments have done for GUI building, though the elements of the TEDIUM representation scheme need not be visual. His key point is that, in this type of environment, an iterative design process becomes both natural and graceful. The need for a stable requirements spec vanishes. Indeed, even the need for modular programming vanishes. In the final chapter of the book, Blum documents impressive statistics on the effectiveness of this adaptive design approach.
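The generate-from-model idea at the heart of TEDIUM can be sketched in miniature. The declarative record model and field names below are invented for illustration (they are not TEDIUM's notation, and TEDIUM emitted MUMPS, not Python); the point is that the designer edits only the model, and the implementation is derived from it mechanically, so iterating on the design never means hand-editing generated code.

```python
# Toy model-driven generation in the spirit of (not copied from) TEDIUM:
# a declarative record model from which the environment derives a
# working validator. Change the model, regenerate, and the behavior
# follows; no implementation code is touched by hand.
MODEL = {  # hypothetical notation, invented for this sketch
    "patient_id": {"type": int, "required": True},
    "weight_kg":  {"type": float, "required": False},
}

def generate_validator(model):
    """Derive a data-entry validator from the declarative model."""
    def validate(record):
        errors = []
        for field, spec in model.items():
            value = record.get(field)
            if value is None:
                if spec["required"]:
                    errors.append(f"{field}: required")
            elif not isinstance(value, spec["type"]):
                errors.append(f"{field}: expected {spec['type'].__name__}")
        return errors
    return validate

validate = generate_validator(MODEL)
print(validate({"weight_kg": "heavy"}))  # both model rules fire
```

Scaled up from one validator to entire interactive applications, this is the sense in which the TEDIUM designer "sculpts" system behavior while the environment worries about implementation.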
Well, that's the big picture. Unfortunately, in Beyond Programming, this big picture is but part of an even bigger picture: Blum (by his own admission) spends the first ten of twelve chapters introducing these ideas with a vertiginous survey of the philosophy of science, the nature of technology, highlights of modern philosophy, AI, human problem solving, Soviet psychology--well, you get the idea. With roughly 600 citations, much of the book reads like Cliffs Notes, and a likely title revision would be "Beyond TEDIUM." Ironically, Blum neglects to cover recent developments in the field in which he is undeniably an expert: Projects such as the European Union's HELIOS SEE for medical informatics and NASA's Knowledge-Based SEE espouse similar ideas. It would have been fascinating to get Blum's perspective on these and other current efforts to reengineer software design. Instead, Blum is at pains to convince us that the idea of adaptive design flows out of broader intellectual currents of our time; but this strategy of arguing from general principles has produced a rather arid and superficial book. TEDIUM, on the other hand, has produced a stable, million-line application that is regarded as life-critical in the hospital, yet is continuously evolving at a rate limited only by the willingness of end users to accommodate changes.