Dr. Dobb's Journal September, 2004
Did Marshall McLuhan answer all these questions in one of those cryptic books he wrote back in the 1960s? Is Ray Bradbury going to sue me over the title of this month's column? Is Michael Moore?
A reader recently took me to task for sneakily slipping my political views into this otherwise strictly technological column. He was objecting to the way I introduced sections of a particular installment of this column with an only vaguely relevant "Bushism." I think he was overreacting. As I see it, the unintentionally humorous sayings of the President of the United States are entertainment, not politics.
"But," that same reader could reply, "since 9/11, everything is changed." No he couldn't. Not since the establishment of a corollary to Godwin's Law forbidding the use of that and several other 9/11 memes. (Godwin's Law: "'As a Usenet discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.' [O]nce this occurs, that thread is over, and whoever mentioned the Nazis has automatically lost whatever argument was in progress"; The Jargon File).
The online magazine Slate seems all twisted up about Bushisms, too. It recently published a series of "Kerryisms." The editor may have been trying to be fair and balanced, or may have been making a postmodernist joke about jokes, or may have thought that if Bushisms are funny then Kerryisms ought to be, too. Well, they're not. John Kerry is not, as a rule, unintentionally funny. He's unintentionally boring. There's a difference, and it's one you want to keep a sharp eye on if you edit a magazine, even an online one, and don't want readers to think that your points are missing or your plugs are fouled.
Being unintentionally funny or unintentionally boring doesn't make one a bad person or even a bad President, but only one of them makes for entertaining reading.
But the reader did have a point: I got more fun out of recounting those Bushisms than I would have if my political views were closer to, say, Karl Rove's. I think that the best way for me to avoid sneakily slipping my political views into the column is to state them baldly once and for all so that readers can filter out the politics. I'll do so at the end of the column. But will you really be able to filter out the politics? Can you ever?
The ways we form judgments and make decisions are awfully subjective and error-prone. It's tempting to think that the whole enterprise badly needs the insights of people who spend most of their waking hours producing artifacts that make decisions more or less flawlessly: computer programmers, for instance. Maybe Real Life just needs a few good algorithms.
I want to share with you, FWIW, some of my recent reading on how people make decisions and how technology might help them make better decisions or at least make their bad decisions more efficiently, and how maybe you can't remove politics from the implementation of such technology.
"I devoutly believe that words ought to be weapons."
Christopher Hitchens, who writes "Fighting Words" for Slate
Many years ago, I came across the engaging little book How to Do Things with Words by J.L. Austin (Oxford University Press, 1962). "Engaging it may be," you may say, "but I don't believe in long engagements."
Touché, I'd say, if you were to say that. I do confess to an annoying habit of recommending books of hernia-inducing heft: either that or books with prose that moves along with all the grace of John Kerry on his voting record or George Bush on a bicycle.
How to Do Things with Words is academic and archaic, but it is also short. Another argument in its favor is that it is based on Austin's William James Lectures at Harvard, so it was written with a view to keeping undergraduates awake.
In this little (only 166 pages!) book, Austin is challenging what he calls an age-old assumption in philosophy "that to say something...is always and simply to state something." Of course, some utterances are obviously not stating anything: They are questions or commands or exclamations. But Austin isn't talking about them. He means that some utterances that have the form of statements are not (or at least are not simply) statements. Sometimes, he says, to say something is to do something.
Getting back to engagements and following that process to its natural conclusion, there is the act of marrying. (Which, this year, is a hot political issue; but then, this year, what isn't?) To say "I will" in the right context is to perform an act: to marry. Say it and you've done it.
Austin calls such verbal acts "performatives" and identifies many of them. Christening. Bequeathing. Betting. Guessing. Rendering a verdict. Promising. Apologizing. Congratulating. Conceding. Authorizing. Warning. Notifying. Ordering someone to do something. Ordering pizza. Naming a child. Challenging someone to a duel. Voting. Vetoing. Repealing. Appointing someone to an office. Declaring war.
These verbal acts have some peculiar attributes that plain old statements don't have. Many of these involve prerequisites that must be satisfied for the speech acts to be considered to have taken place. Saying "I will" doesn't get you married if you're already in that blessed state. (Blessing: another performative.) Others have to do with obligations and other social pressures incumbent on the performance of the speech act. Promising, for example, places one in a system of social obligations and expectations; marrying even more so.
It may have been necessary for Austin to point out this aspect of language, but I don't think that we would consider his point particularly controversial. It is true that the use of words can perform actions, but this is pretty obvious to most of us. And it is a good thing, too, if you are a professional programmer, because what you do for a living is to perform actions by uttering, or anyway typing, words. Programmers most certainly know How to Do Things with Words.
In fact, one can cast Austin's distinction between performatives and declarative statements in terms of the procedural and declarative programming paradigms. In programming, the procedural seems more fundamental to most people, and the declarative paradigm seems a somewhat roundabout way to perform actions.
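To make that contrast concrete, here is a small illustration of my own (not from Austin or from any particular system): the same computation written first as a sequence of actions, then as a description of the desired result.

```python
# Procedural: say *how*, step by step. Each statement performs an action
# that changes state (here, the accumulator `total`).
def sum_of_even_squares_procedural(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Declarative: say *what* you want; the stepwise "how" is left to the
# language runtime.
def sum_of_even_squares_declarative(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

print(sum_of_even_squares_procedural(range(10)))   # 120
print(sum_of_even_squares_declarative(range(10)))  # 120
```

Both produce the same answer; the difference is whether the words read as a recipe of actions or as a statement of fact about the result.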
Well then, is every use of words a performance of an act? (Beyond the obvious act of uttering the words?) No, Austin doesn't go that far. But others, influenced by the example of computer programming, might be tempted to.
Human Values and the Design of Computer Technology, edited by Batya Friedman (Cambridge University Press, 1997; ISBN 1575860805), is not a book that even I would call engaging, but it has its moments. At one point, the temperature of the information gets pretty high, as a legendary computer scientist gets into an argument with a Principal Scientist at Xerox PARC and accuses her of imputing sinister intentions to him and implying that he wants to impose regimes of enforced discipline on innocent knowledge workers. Hot stuff.
The legendary computer scientist is Terry Winograd and the magilla causing the dustup is something called "speech act theory," which is the conceptual base of Winograd and Flores's decision-support system, called "The Coordinator." What The Coordinator does is facilitate working together in groups, for example, software projects. Part of the way it works involves defining categories of speech acts, like Promising or Accepting. When communicating via The Coordinator, as I understand it, you don't just chat aimlessly, you organize communications inside these categorical frames of Requests and Promises and Acceptances and Withdrawals and Reneges. All of which are speech acts, what Austin called "performatives." And the software imposes constraints consistent with the kind of speech act you're committing. A Promise puts you in a situation where only certain specific actions are open to you. ("You are in a room; doors lead to the left and right..." Well, not that exactly.)
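A system like that can be sketched as a little state machine. The state names and transitions below are my own simplification of Winograd and Flores's "conversation for action" idea, not their actual design; the point is only to show how committing a speech act can constrain which acts remain open to you.

```python
# Hypothetical sketch: each state maps the speech acts currently "open"
# to the state they lead to. Anything not listed is simply not available.
ALLOWED = {
    "start":     {"Request": "requested"},
    "requested": {"Promise": "promised", "Decline": "closed",
                  "Withdraw": "closed"},
    "promised":  {"Report": "reported", "Renege": "closed"},
    "reported":  {"Accept": "closed", "Reject": "promised"},
}

class Conversation:
    def __init__(self):
        self.state = "start"

    def perform(self, act):
        moves = ALLOWED.get(self.state, {})
        if act not in moves:
            raise ValueError(f"{act!r} is not open to you in state {self.state!r}")
        self.state = moves[act]
        return self.state

c = Conversation()
c.perform("Request")
c.perform("Promise")   # a Promise narrows what you may do next
c.perform("Report")
c.perform("Accept")    # the conversation is completed
```

Note that the constraint is exactly what Suchman objects to: whoever writes the `ALLOWED` table decides what kinds of conversation are possible at all.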
The connection with Austin is pretty direct: Speech act theory grows out of Austin's work as extended by John Searle. But does it push Austin's views to the extreme of treating all utterances as actions?
The PARC Principal Scientist thinks so. Her name is Lucy Suchman and her point, in essence, seems to be that the categories in terms of which we converse exert a lot of control over the directions our conversations can take (a version of linguist Benjamin Whorf's famous hypothesis that language determines thought), so if you make people converse through a computer system and you define the conversational categories that the system implements, then YOU have a lot of control over the directions the conversations can take. Categories have politics, she says.
Winograd's response boils down to saying that if you're going to build a computer system to deal with any real-world problem, you darned well have to use hard-edged categories that may fail to perfectly capture the messiness of the real-world problem. You don't get far in computer-system design or any rational enterprise until you come up with clearly defined categories. And Lucy Suchman shouldn't be calling him a tyrant for simply being clear.
I think that Winograd misses Suchman's point, although I don't know how useful that point is. She is saying, in part, that human communication is a lot richer than The Coordinator's categories allow for. Could a system for collaborative work allow categories to emerge from the communication process in the group, in some organic way, rather than being imposed from outside? I don't know how you'd do that, but it would be consistent with the point of another recent book, which argues that groups are sometimes not as stupid as we think.
The New Yorker economics editor James Surowiecki's The Wisdom of Crowds: How the Many Are Smarter than the Few (Random House, 2004; ISBN 0385503865) asserts that "under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them." He thus challenges the conventional wisdom that crowds are stupid. This got the attention of Slate, The Economist, Wired, Forbes, and so on. His argument also appeals to the Smart Mobs crowd. (For the lowdown on Smart Mobs, see Howard Rheingold's Smart Mobs, Perseus Publishing, 2002; ISBN 0738206083, or http://www.smartmobs.com/.) Howard Rheingold says "smart mobs emerge when communication and computing technologies amplify human talents for cooperation."
But Surowiecki's point is not the same as Rheingold's, and he's not proposing anything as trippy as "emergent group intelligence."
What he is asserting is pretty concrete and testable. Given certain conditions, for certain kinds of decisions, groups can make better decisions than any of their members. And he cites a lot of evidence. So what are the conditions, what are the appropriate kinds of decisions, and what is the evidence?
There are four conditions that must be met: diversity of opinion, independence of members from one another, decentralization, and a good method for aggregating individual judgments into a collective answer.
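The aggregation condition is the easiest to demonstrate. Here is a toy illustration of my own (my numbers, not Surowiecki's data): average many independent, individually noisy estimates, and the group's error shrinks well below the typical individual's.

```python
# Toy model: 500 independent guessers, each unbiased but noisy.
import random

random.seed(42)
truth = 1000.0  # say, the weight of the ox at the county fair
guesses = [random.gauss(truth, 150.0) for _ in range(500)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - truth)
typical_individual_error = sum(abs(g - truth) for g in guesses) / len(guesses)

print(crowd_error)               # small compared to any one guesser
print(typical_individual_error)  # roughly 120 (that is, 150 * sqrt(2/pi))
```

The catch is the independence condition: if the guessers influence one another, their errors correlate and the averaging stops helping.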
Given that these conditions are met, Surowiecki claims that groups can be smarter than their members in several broad decision-making contexts: cognition problems, which have definite answers; coordination problems, where members must mesh their behavior with one another; and cooperation problems, where self-interested parties must be induced to work together.
Groups, however, are not good in situations where skill is required.
The reason that groups can often be better than experts, he says, is that the expert can't have the breadth of viewpoint of the group. So what's his evidence? Well, he's very good at coming up with convincing examples, and they are not merely anecdotes. His examples tend to be specific cases in which the untutored group clearly did beat the educated expert, and did so in a statistically significant way.
One example: On the TV show "Who Wants to Be a Millionaire," contestants can call on experts to help them answer questions or poll the studio audience. In the sample taken, the experts had a 65 percent success rate; the studio audience was right 91 percent of the time. Surowiecki presents so many such examples that you start to think there must be something to his argument.
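A back-of-the-envelope simulation shows why a result like that isn't so surprising. The numbers below are mine, not Surowiecki's, and I've simplified the audience poll to a two-way majority vote: even if each audience member is independently right only 60 percent of the time, a majority of 101 of them beats a single 65-percent expert handily.

```python
# Monte Carlo estimate of how often a majority of weak, independent
# voters picks the correct answer (a Condorcet-jury-style setup).
import random

random.seed(0)

def majority_correct(n_voters=101, p_individual=0.60, trials=10_000):
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_individual
                            for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

print(majority_correct())  # roughly 0.98, versus 0.65 for the lone expert
```

The simulation leans on the independence condition; an audience that heard each other's answers first would do considerably worse.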
The problem is selection bias. Are these cases typical? His argument can't be evaluated without controlled experimental studies.
But Surowiecki is arguing a case, not proving it, and that's something worth doing. His conclusion, if it proves to be correct, would be very pleasingly populist.
It would also be very useful. If we know the conditions under which groups are smart, we can structure decision-making situations appropriately, bringing in experts when appropriate and presenting the right kind of questions to the group when that's the best way to go. It could improve decision-support programs like Winograd's. And it could direct us away from building voting machines and toward building systems that help citizens exercise their collective wisdom.
I promised to reveal my political views, so here goes:
No, let's leave it at that.
DDJ