PROGRAMMING PARADIGMS

Mind and Life as Mechanism

Michael Swaine

From time to time in this space I attempt to justify to myself my four years of undergraduate and three years of graduate study of the human mind. I suppose I should be satisfied with the deep insights into human nature that my education has given me, if only I knew what they were. Or maybe I should just put it all behind me as the transient obsession of a youth wasted hanging out in coffeehouses among poets and social workers. After all, Jerry Pournelle actually finished a doctorate in psychology and doesn't feel the need to inject the stuff into his Byte columns.

But I do feel the need to inject the stuff here, and in the past, I have injected several critiques of books on what could be called the "mechanical model of the mind." I'm at it again this month, although this time I offer two points in my defense: 1. The specific model I discuss here was concocted by a real computer scientist who has implemented at least a part of it in a real computer program; and 2. toward the end of the column I execute a slippery segue into a completely different subject, a discussion of a real commercial computer program that embodies an interesting programming paradigm or two.

The Mind is Software

The mechanical model is simply this: The brain is a computer, and the mind is its software.

To many people, this notion is unpalatable. Certainly, anyone with traditional religious beliefs (almost any tradition) should be uncomfortable with it. But it's possible to be skeptical about the mechanical model and also to be agnostic with respect to any religious beliefs about the mind. Many nonreligious psychologists and philosophers, and some computer scientists, are mechanical-model skeptics.

What isn't so easy, apparently, is to come up with an alternative model of the mind that has an equivalent level of scientific rigor. A lot of the critics of the mechanical model (including Hubert Dreyfus and John Searle) only attack it, without offering any model of their own.

There are exceptions. In The Emperor's New Mind (Oxford University Press, 1989), Roger Penrose has presented an approach that rests ultimately on quantum uncertainty. Penrose's approach is brave, because it's open to easy ridicule: Drawing on quantum uncertainty to explain the workings of the mind can seem like an act of scientific desperation. I've discussed Penrose's approach here before, and my view is more generous: I'm willing to believe that the questions we want to ask about the mind may be the sort of questions that can only be answered by extraordinary, credibility-challenging answers. Maybe. But Penrose has the burden of demonstrating that his theory has any clear scientific advantage over the simpler, mechanical model. It's not clear that he can do it.

Enter David Gelernter

Like Penrose, David Gelernter is a mechanical-model skeptic. In The Muse in the Machine (The Free Press, 1994), he presents a model of the mind that challenges the mechanical model. But Gelernter's model does its work without also challenging credibility. And Gelernter is not a psychologist or a philosopher, but a computer scientist, the inventor of the distributed programming language Linda, and a leading light in programming for parallel architectures. And Gelernter is actually building a program that embodies his model of the mind.

The Turing Test is a Black Hole

It is arguable that this mechanical model is not really a theory so much as a choice of research instruments. Acting as though the mind were the brain's software lets psychologists use the computer as a tool for doing research into mental processes, and they've been doing that for more than 20 years. In Human Associative Memory (V.H. Winston & Sons, 1973), psychologists John Anderson and Gordon Bower indicated how widespread, and how useful, computer simulations had already become in psychological theory:

The various neo-associationist theories of memory..., including our own, have been cast in the form of computer simulation models.... This is no accident. The task of computer simulation simultaneously forces one to consider both whether his theory is sufficient for the task domain to be simulated and also whether it can deal with the particular trends found in particular experiments.

But these guys are talking about particular simulations of particular aspects of the mind. If we consider the mechanical model itself as a theory, is it really specific enough to generate any testable predictions?

Testing the mechanical model does seem to present some problems. A lot of the questions we want to ask about the mind don't immediately lead to critical experiments that could demolish the model as a theory of mental organization. The really interesting questions are often as vague as they are interesting. And so, somehow, attempts to test assertions from some mechanical model regarding the workings of the mind often lead to some sort of Turing test, and the Turing test never proves anything of scientific interest.

Here's how all questions about the mind tend to get sucked into the Turing test, as though it were a black hole, whenever they approach the mechanical model:

There's precious little that we might consider the mind capable of doing that we can't convince ourselves that software can also do, in principle. The mind doesn't do any better with uncomputable problems than a computer does. And if a mind or a computer program fails to solve a computable problem, it's arguable that the failure was a practical one having to do with available resources (including time) rather than a fundamental limitation.

So the question morphs into one of not whether but how problems are solved. Certainly the mind doesn't do math, for example, the way Mathematica does. But a more meaningful question is, "Can we write a program that does math the way the mind does?"

But that's basically a programming challenge. Are you a good enough programmer to write a program that simulates some aspect of the operation of the human mind sufficiently well to meet some kind of Turing test?

Problem Solving is not the Problem

Some would argue--Gelernter for one--that this is simply not the point.

It is a common view that there are two modes of thought, Gelernter says: the rational, problem-solving, goal-directed mode, and the creative, intuitive, emotional mode. Gelernter spends much of his book describing these two modes of thought. He calls the rational mode "high-focus," and the emotional mode "low-focus," for a reason I'll explain momentarily.

All existing computer models of the mind, he argues, tackle only the high-focus mode. The reason that research questions about the mind get sucked into the Turing test is that high-focus thinking places such emphasis on problem solving. Can the mind solve such-and-such a problem? How does the mind solve such-and-such a problem?

But a lot of thought is not problem solving. Particularly low-focus thought.

So should we consider a model of this other mode of thinking? A low-focus model, or perhaps a dual-mode model? Some (such as psychologist Endel Tulving) have proposed this, but Gelernter thinks it's a bad idea.

Gelernter thinks that what we've got is not really two discrete modes of thought, but a continuum. This is, in fact, his central thesis. He calls it the "continuum focus," and uses the terms high- and low-focus to describe the ends of the continuum. By focus he doesn't mean focus of attention, although that's close to his meaning. Instead he's talking about how detailed your perception is. High-focus thought looks at aspects or attributes of a scene or a phenomenon or a memory. For high-focus thought, the usual sorts of associative models make sense: Things get recorded in memory and are retrieved from memory on the basis of their attributes. Connections get made on the basis of attributes. Things that have many attributes in common are more likely to call each other up from memory. Chains of thought will tend to be made up of ideas that are close in meaning, in the sense that they share many attributes. For high-focus thought, the usual associative mechanical models are more or less the correct story.
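The attribute-based retrieval that Gelernter accepts for high-focus thought is easy to sketch. The following toy code is my own illustration (the memory contents and the ranking scheme are invented for the example): stored items are recalled ranked by how many attributes they share with a cue, so memories with many attributes in common tend to call each other up.

```python
# Toy sketch of high-focus associative retrieval: memories are
# recorded with sets of attributes, and a cue recalls whichever
# memories share the most attributes with it.

def recall(memories, cue_attributes, top=3):
    """Rank stored memories by attribute overlap with the cue."""
    scored = [(len(set(attrs) & set(cue_attributes)), name)
              for name, attrs in memories.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top] if score > 0]

memories = {
    "beach trip":   {"sand", "sun", "water", "vacation"},
    "ski weekend":  {"snow", "cold", "vacation", "mountain"},
    "tax deadline": {"forms", "deadline", "stress"},
}

print(recall(memories, {"sun", "vacation"}))
# The beach trip shares two attributes with the cue, the ski
# weekend one, the tax deadline none.
```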

Feelings are the Glue of Thought

But then there's low-focus thought. Here, entire scenes get stored away in memory, uninterpreted, with trivial details and coincidentally occurring but logically unrelated events getting as much importance as crucial attributes. Even the feeling you were experiencing when you laid down a memory trace gets stored away with it. Low-focus thought deals with information in a different form: as a large, uninterpreted chunk of perception. If it were a data type, it would be a BLOB (binary large object). Low-focus thoughts don't have addressable attributes.

This kind of memory storage clearly requires a different kind of retrieval mechanism. If high-focus thoughts are retrieved on the basis of their attributes, that won't work for low-focus thoughts, which are stored as uninterpreted BLOBs. If they are to be retrieved at all, and if there is to be any way of associating one with another, they need to be tagged in some way.

Gelernter proposes feelings as the tagging mechanism. The emotional state you were in when you experienced the event or thought gets attached to its representation in memory. No internal detail is accessible. Only the emotion is available to use for retrieval or for associating such memories. So two low-focus thoughts that have the same emotional tag have something in common. One can call up another. If you are now in emotional state X, it is easier to access memories that have state X as their emotional tag, which is to say, memories of thoughts or events that occurred when you were also in emotional state X. Emotion is the glue for low-focus thoughts.
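A low-focus store, as the column describes it, would hold opaque blobs whose only retrievable handle is an emotion tag. Here is a minimal sketch of that idea (the memories and tags are invented; this is my illustration, not Gelernter's code):

```python
# Sketch of low-focus memory: each memory is an opaque blob with a
# single emotion tag, and the only way to recall one is to be in a
# matching emotional state. Retrieval never inspects blob contents.

class LowFocusMemory:
    def __init__(self):
        self.store = []  # list of (emotion_tag, opaque_blob) pairs

    def record(self, emotion, blob):
        # The scene is stored whole and uninterpreted; only the
        # emotion tag is ever examined.
        self.store.append((emotion, blob))

    def recall(self, current_emotion):
        # Two memories "have something in common" only when their
        # emotional tags match.
        return [blob for tag, blob in self.store if tag == current_emotion]

mem = LowFocusMemory()
mem.record("quiet satisfaction", "the whole scene of hitting a nail squarely")
mem.record("quiet satisfaction", "the end of a big sneeze, details and all")
mem.record("dread", "the morning of the exam")

print(mem.recall("quiet satisfaction"))  # both satisfaction-tagged scenes
```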

Gelernter makes clear that he's not just talking about the kinds of emotional states for which we have common adjectives: happy, sad, jealous, disgruntled. He's imagining a much richer and subtler palette of feelings. The feeling of satisfaction from solving a problem. The different satisfaction of hitting a nail squarely. The still-different satisfaction that comes at the end of a big sneeze. Gelernter is thinking in terms of a whole lot of distinguishable feelings.

He is also thinking in terms of this continuum. Thoughts are not typically one or the other, high focus or low focus; most thoughts are somewhere in between. Focus is a continuum, Gelernter claims.

Gelernter presents many examples designed to show that focus must be a continuum, but he does something else as well. He argues that you can see the continuum in the development of the individual, with children operating in a more low-focus mode than adults, and in the development of the human mind over the centuries, with early texts showing a low-focus worldview. This last point becomes important when he draws upon ancient texts for support, something I'll get into shortly.

And Gelernter makes another interesting claim about low-focus thought and feelings, a claim that makes him a skeptic about the mechanical model. We can't model low-focus thought as the software that runs on the computer that is the brain, he says, because feelings do not reside strictly in the brain. How we feel is as much a function of glandular secretions and other bodily states as it is of brain states. The mind doesn't live in the brain, it lives in the body as a whole.

The Feeling Program

Having claimed that a computer model is impossible, Gelernter proceeds to build one. Here's how he does it: His computer model cheats. He freely admits this fact. What he does is feed ready-made emotions into the program. He tells it how it feels about things, bypassing the need for a body to resolve these matters.

Gelernter's program is called "FGP," short for its primary operations: Fetch, Generalize, and Project. It embodies the kind of memory storage and retrieval that his low-focus and high-focus memories require. It's still in early development, so there are not a lot of results to report, apparently.
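The column says only what F, G, and P stand for, so any code here is guesswork. Purely as a sketch of what a fetch/generalize/project pipeline could look like, with invented data, and emphatically not Gelernter's actual algorithm:

```python
# A guess at an FGP-style loop: fetch records resembling a probe,
# generalize by pooling their values for some field, project the
# most common pooled value as a prediction.

def fetch(memory, probe):
    """Fetch: retrieve every record that matches the probe somewhere."""
    return [rec for rec in memory
            if any(rec.get(k) == v for k, v in probe.items())]

def generalize(records, field):
    """Generalize: pool the fetched records' values for one field."""
    return [rec[field] for rec in records if field in rec]

def project(values):
    """Project: predict the most common pooled value."""
    return max(set(values), key=values.count) if values else None

memory = [
    {"kind": "bird", "size": "small", "flies": True},
    {"kind": "bird", "size": "small", "flies": True},
    {"kind": "bird", "size": "large", "flies": False},
]
probe = {"kind": "bird", "size": "small"}
print(project(generalize(fetch(memory, probe), "flies")))  # True
```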

A Critique of Pure Feeling

But Gelernter's program does not appear to be equipped to test Gelernter's theory.

It's a theory that presents some difficulties in terms of testing. For one thing, there are all those different emotions. Unless Gelernter presents a model for how emotions are generated or classified or related to one another, these emotional states are just so many independent variables in the theory. Too many independent variables for the theory to be testable, I'd think. And he doesn't present a model for these emotions, unless the single-dimension numbers he uses in one example are to be taken seriously.

There are a number of other unanswered questions about emotions.

Given some satisfactory answers to these questions, does Gelernter present any means for testing his claim that emotion is the glue for thoughts? It should be testable--but he doesn't present any test of it.

Gelernter's central thesis is this idea of a continuum, but it may not be as easy as Gelernter thinks to distinguish between a model involving two discrete processes and one involving a continuum. Note that the normal distribution, a continuous model, does a fine job of predicting the number of heads in a run of coin flips, a discrete process. That's one specific continuous model versus one specific discrete one, yes; but even if Gelernter had a specific continuous model, he'd have to demolish all reasonable discrete ones to establish support for his continuous one.
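The coin-flip point, that a continuous model can mimic a discrete process closely, is easy to check numerically. Here is a small sketch (my own illustration, not from Gelernter) comparing the exact binomial probability of k heads in n fair flips against its normal approximation:

```python
# A continuous model mimicking a discrete one: the normal curve
# approximates the binomial distribution of heads in n fair flips.
import math

def binomial_pmf(n, k):
    # Exact discrete probability of k heads in n fair flips.
    return math.comb(n, k) * 0.5 ** n

def normal_approx(n, k):
    # Normal density with the binomial's mean n/2 and standard
    # deviation sqrt(n)/2, evaluated at the integer k.
    mu, sigma = n / 2, math.sqrt(n) / 2
    return math.exp(-((k - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

n = 100
for k in (45, 50, 55):
    print(k, round(binomial_pmf(n, k), 4), round(normal_approx(n, k), 4))
```

For n = 100 the two columns agree to about three decimal places across the middle of the distribution, which is exactly the trouble: the continuous model and the discrete one are nearly indistinguishable from the data.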

To be fair, Gelernter doesn't try to prove anything scientifically about his model in his book. When he gets to the point where you would expect to see results of tests, he launches into a literary exegesis. I confess that this mystifies me.

His analysis of a Biblical story about Abraham and circumcision is way over my head, but his conclusions do seem to hinge on the assumption that we know what emotions would have been stirred centuries ago in the average Jew by the idea of circumcision at birth, as opposed to circumcision at puberty. This exegesis is supposed to support the idea that primitive thought was more emotion-based than present-day thought, and this in turn is supposed to support the idea of a continuum of modes of thought, from emotion based to rational. I don't see how.

This literary approach seems to me capable of "proving" anything. For example, take the Arabian Nights.

The Thousand and One Theories

The most salient aspect of the Arabian Nights is its structure of stories within stories. No commentator on the Nights writes at any length about it without touching on this obvious and seemingly important fact. Stories aren't written this way any more, but in the time of the Arabian Nights, it was common.

What does this blatant difference in the structure of narrative tell us about modes of thought in primitive and modern times? Is it possible that nested narratives are easier to remember than sequentially presented narratives? In a time when stories were passed on via oral tradition, this would have been crucial to the survival of the stories.

Which leads us to postulate the psychological hypothesis that hierarchical structures for things like narratives aid in their recall from memory.

The point is that you can pick any piece of literature at random and do this kind of speculative stuff.

That doesn't mean that it's not useful if followed up on. I actually did a study in graduate school that showed that recall for a certain type of narrative material was better when that material had a hierarchical structure. Sounds more relevant than it was: Since I was looking at the structure within a single narrative rather than a structure, like that of the Arabian Nights, that ties narratives together, my results don't really have anything to do with the Arabian Nights question. Except this: They do show that it is possible to formulate testable conjectures about the way the mind works on the basis of a critical reading of ancient texts.

It seems to me that this is exactly what Gelernter fails to do, and this is why I find his arguments ultimately unconvincing.

Life is Software

Gelernter rejects the mechanical model because emotions are part of the work of the mind, and emotions depend on the whole body rather than just on the brain. So the formulation, "brain=computer and mind=its software" can't be right.

Well, why not just extend the formulation: "body=computer and mind=its software"? This could be called the "mechanical model of life," and it looks like the assumption underlying artificial-life research.

If you aren't up to speed on "a-life," a good place to start is with an entertaining tool by Rudy Rucker called Artificial Life Lab, published by the Waite Group.

The Waite Group has been publishing a lot of book-and-disk packages in trendy areas like fractals, morphing, and a-life, and while they're all pretty entertaining, most are of little real interest to developers. This package, based on work that Rucker did while at Autodesk, should be of interest to anyone. Although he doesn't give you a language to work in, he does provide enough technical detail about the a-life productions this program generates to serve as a solid introduction to the subject.

This isn't just cellular automata spreading patterns across a grid. Rucker's Boppers program lets you create colonies of critters, snip their DNA, fiddle with their sexual habits, muck around with diet and death, and do infinite tweaking of the supplied algorithms. All this in addition to watching cellular automata spread patterns across a grid. Rucker's chapter on theory is as clear an introduction to the subject as I've seen.
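For readers who have never watched cellular automata spread patterns across a grid, a few lines of code suffice to see the effect. This is a generic one-dimensional automaton (Wolfram's rule 30), my own illustration and not anything from Rucker's Boppers:

```python
# Minimal one-dimensional cellular automaton. Each cell's next
# state is looked up from the rule number, indexed by the states
# of its left neighbor, itself, and its right neighbor.

def step(cells, rule=30):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # neighborhood as a 3-bit number
        out.append((rule >> idx) & 1)              # look up the rule's bit
    return out

row = [0] * 31
row[15] = 1  # a single live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Run it and a chaotic triangle of cells grows out of the single seed, which is the whole charm of the genre.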

Oh, and it's a lot of fun.


Copyright © 1994, Dr. Dobb's Journal