Harnad, S. (1993) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism", D.M.W. Powers & P.A. Flach, eds.). Pp. 68-73.


The Failures of Computationalism

John R. Searle
Department of Philosophy
University of California
Berkeley CA
searle@cogsci.berkeley.edu

I. The Power in the Chinese Room.

Harnad and I agree that the Chinese Room Argument deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let's begin by pondering the implications of the Chinese Room. The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, could implement any program you like, and still not understand a word of Chinese. Now, why? What does the genuine Chinese speaker have that I in the Chinese Room do not have? The answer is obvious. I, in the Chinese Room, am manipulating a bunch of formal symbols; but the Chinese speaker has more than symbols: he knows what they mean. That is, in addition to the syntax of Chinese, the genuine Chinese speaker has a semantics in the form of meaning, understanding, and mental contents generally.

But, once again, why? Why can't I in the Chinese room also have a semantics? Because all I have is a program and a bunch of symbols, and programs are defined syntactically in terms of the manipulation of the symbols. The Chinese room shows what we should have known all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)

Why did the old-time computationalists make such an obvious mistake? Part of the answer is that they were confusing epistemology with ontology; they were confusing "How do we know?" with "What is it that we know when we know?" This mistake is enshrined in the Turing Test (TT). Indeed this mistake has dogged the history of cognitive science, but it is important to get clear that the essential foundational question for cognitive science is the ontological one: "In what does cognition consist?" and not the epistemological other minds problem: "How do you know of another system that it has cognition?"

The feature of the Chinese Room that appeals most to Harnad is that by allowing the experimenter to be the entire system it eliminates any "other minds problem". Since I know that I don't understand Chinese, solely in virtue of implementing the program and passing the TT, I therefore know that those conditions are not by themselves sufficient for cognition, regardless of the fact that other systems that satisfy those conditions may behave just as if they did have cognition.

I think this is not the most important feature of the Chinese Room, but so far, there is no real disagreement between Harnad and me. The disagreement comes over what to make of the insight given us by the Chinese Room. Harnad wants to persist in the same line of investigation that the Chinese Room was designed to eliminate and see if he cannot find some version of the computer theory of the mind that is immune to the argument. In short, he wants to keep going with the epistemology and the computation. He wants to get a better version of TT, the TTT. Harnad takes the Other Minds Problem seriously. He takes the Systems Reply seriously, and he wants to answer the Chinese Room Argument by inventing cases where the experimenter can't be the whole system. Because the Chinese Room Argument relies on the multiple realizability of computation, he thinks the way to answer it is to add to computation a feature which is not multiply realizable, transduction. But this move is not based on an independent study of how the brain causes cognition; rather, it is ad hoc and ill motivated, as I will later argue.

I, on the contrary, think that we should not try to improve on the TT or get some improved versions of computationalism. TT was hopeless to start with because it confused sameness of external behavior with sameness of internal processes. No one thinks that because an electric engine produces the same power output as a gas engine, the two must have the same internal states. Why should it be any different with brains and computers? The real message to be found in the Chinese Room is that this whole line of investigation is misconceived in principle, and I want now to say why. Because time and space are short, and because my debate with Stevan is a conversation among friends, I will leave out the usual academic hedges and qualifications and just state my conclusions. (For detailed argument, the reader will have to consult The Rediscovery of the Mind, 1992.)

1. The other minds problem is of no special relevance or interest to cognitive science. It is a philosophers' problem exemplifying skepticism in general, but it is not a special problem for cognitive science. So we should not waste our time trying to find tests, such as the TT, the TTT, etc., that will address the other minds problem.

Cognitive science starts with the fact that humans have cognition and rocks don't. Cognitive scientists do not need to prove this any more than physicists need to prove that the external world exists, or astronomers need to solve Hume's problem of induction before explaining why the sun rises in the East. It is a mistake to look for some test, such as the TT, the TTT, etc., which will "solve the other minds problem" because there is no such problem internal to Cognitive Science in the first place.

2. Epistemological problems are of rather little interest in the actual practice of science, because they always have the same solution: Use any weapon that comes to hand and stick with any weapon that works. The subject matter of cognitive science concerns human beings and not rocks and computers, and we have a large variety of ways for figuring out what human beings are thinking, feeling, knowing, etc.

3. Where the ontology -- as opposed to the epistemology -- of the mind is concerned, behavior is irrelevant. The Chinese Room Argument demonstrated this point. To think that cognitive science is somehow essentially concerned with intelligent behavior is like thinking that physics is essentially a science of meter readings. (Chomsky's example). So we can forget about the TT and TTT, etc. External behavior is one epistemic device among others. Nothing more.

4. Computation has the same role in cognitive science that it has in any other science. It is a useful device for simulating features of the real domain we are studying. Nothing more.

The idea that it might be something more is a mixture of empirical and conceptual confusion. (Again, see The Rediscovery of the Mind for details.)

5. The real domain we are studying includes real, intrinsic cases of mental states and processes, such as perceiving, thinking, remembering, learning, talking, understanding, etc.; and all of these have mental contents. The problem for cognitive science is not symbol grounding, but symbol meaning and symbol content in general. Cognitive science is concerned with the actual thought contents, semantic contents, experiences, etc., that actual human beings have. Either Harnad's "grounding" is to be taken as synonymous with "content", in which case, why use the notion of grounding? Or it isn't, in which case, it's irrelevant.

All of these mental processes - thinking, talking, learning, etc. - are either conscious or potentially so. It is best to think of cognitive science as the science of consciousness in all of its varieties.

6. All cognitive states and processes are caused by lower level neuronal processes in the brain. It is a strict logical consequence of this point that any artificial system capable of causing cognition would have to have the relevant causal powers equal to those of the brain. An artificial brain might do the job using some other medium, some non-carbon-based system of molecules for example; but, whatever the medium, it must be able to cause what neurons cause. (Compare: airplanes do not have to have feathers in order to fly, but they do have to duplicate the causal power of birds to overcome the force of gravity in the earth's atmosphere.)

So in creating an artificial brain we have two problems: first, anything that does the job has to duplicate and not merely simulate the relevant causal powers of real brains (this is a trivial consequence of the fact that brains do it causally); and second, syntax by itself is not enough to do the job (this we know from the Chinese Room).

Because we now know so little about how the brain actually works, it is probably a waste of time at present to try to build an artificial brain that will duplicate the relevant causal powers of real brains. We are almost bound to think that what matters is the behavioral output (such as "robotic capacity") or some other irrelevancy, and I think this is one source of Harnad's TTT.

7. Once you see that external behavior is irrelevant to the ontology of cognition, Harnad's TTT proposal amounts to a piece of speculative neurophysiology. He thinks that if only we had certain kinds of analog transducers, then those plus computation and connectionist nets would equal the causal powers of real brains. But why does Harnad think that? If you know anything about the brain, the thesis would seem literally incredible. There is nothing per se wrong with speculative neurophysiology, but it needs to have a point. Once you realize that the brain is a very specific kind of biological organ, and that the external behavior of the organism is in no way constitutive of the internal cognitive operations, then there seems to be little point to the type of speculative neurophysiology exemplified by the TTT.

Harnad is very anxious to insist that TTT is not refuted by the Chinese Room. Maybe so, but who cares? If it is unmotivated and neurobiologically implausible what point does it have? Its only motivation appears to be a kind of extension of the behaviorism that was implicit in the TT. That is, Harnad persists in supposing that somehow or other behavior (robotic capacity) is ontologically and not merely epistemologically relevant.

8. For reasons that are mysterious to me, Harnad takes the Systems Reply seriously. He says that in cases where I am not implementing the whole system, then, "as in the Chinese gym, the System Reply would be correct." But he does not tell us how it could possibly be correct. According to the Systems Reply, though I in the Chinese Room do not understand Chinese or have visual experiences, the whole system understands Chinese, has visual experiences, etc. But the decisive objection to the Systems Reply is the one I made in 1980: if I in the Chinese Room don't have any way to get from the syntax to the semantics, then neither does the whole room; and this is because the room hasn't got any additional way of duplicating the specific causal powers of the Chinese speaker's brain that I do not have. And what goes for the room goes for the robot.

In order to justify the Systems Reply one would have to show (a) how the system gets from the syntax to the semantics. And in order to show that, one would have to show (b) how the system has the relevant specific internal causal powers of the brain. Until these two conditions are met, the Systems Reply is just hand-waving.

I believe the only plausibility of the Systems Reply comes from a mistaken analogy. It is a familiar point that a system made of elements may have features caused by the behavior of the elements that are not features of the individual elements. Thus the behavior of the H2O molecules causes the system composed of those molecules to be in a liquid state even though no individual molecule is liquid. More to the point, the behavior of neurons can cause a system made of those neurons to be conscious even though no individual neuron is conscious. So why can't it be the same with the computational system? The analogy breaks down at a crucial point. In the other cases the behavior of the elements causes a higher level feature of the system. But the thesis of Strong AI is not that the program elements cause some higher level feature; rather, the right program that passes the TT is supposed to constitute cognition. It does not cause it as a by-product. Indeed the symbols in the implemented program don't have any causal powers in addition to those of the implementing medium. The failure of the analogy between computational systems and other systems which really have emergent properties comes from the fact that genuine emergent properties require causal relations between the lower level elements and the higher level emergent property, and these causal relations are precisely what is lacking, by definition, in the computational models.

II. Connectionism to the Rescue?

Since connectionism looms large in Harnad's account, and since it has received a great deal of attention lately, I will devote a separate section to it.

How we should assess connectionism depends on which features of which nets are under discussion and which claims are being made. If the claim is that we can simulate, though not duplicate, some interesting properties of brains on connectionist nets, then there could be no Chinese Room style of objections. Such a claim would be a connectionist version of weak AI. But what about a connectionist Strong AI? Can you build a net that actually has, and does not merely simulate, cognition?

This is not the place for a full discussion, but briefly: If you build a net that is molecule for molecule indistinguishable from the net in my skull, then you will have duplicated and not merely simulated a human brain. But if a net is identified purely in terms of its computational properties then we know from familiar results that any such properties can be duplicated by a Universal Turing machine. And Strong AI claims for such computations would be subject to Chinese Room style refutation.

For purposes of the present discussion, the crucial question is: in virtue of what does the notion "same connectionist net" identify an equivalence class? If it is in virtue of computational properties alone, then a Strong AI version of connectionism is still subject to the Chinese Room Argument, as Harnad's example of the three rooms illustrates nicely. But if the equivalence class is identified in terms of some electrochemical features of physical architectures, then it becomes an empirical question, one for neurobiology to settle, whether the specific architectural features are such as to duplicate and not merely simulate actual causal powers of actual human brains. But, of course, at present we are a long way from having any nets where such questions could even be in the realm of possibility.

The characteristic mistake in the literature - at least such literature as I am familiar with - is to suggest that because the nets duplicate certain formal properties of the brain they will thereby duplicate the relevant causal properties. For example, the computations are done in systems that are massively parallel and so operate at several different physical locations simultaneously. The computation is distributed over the whole net and is achieved by summing input signals at nodes according to connection strengths, etc. Now will these and other such neuronally inspired features give us an equivalence class that duplicates the causal powers of actual human neuronal systems? As a claim in neurobiology the idea seems quite out of the question, as you can see if you imagine the same net implemented in the Chinese Gym. Unlike the human brain, there is nothing in the gym that could either constitute or cause mental states and processes.
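To fix ideas about the kind of "formal property" being appealed to here, the node operation just described (summing input signals according to connection strengths and squashing the result) can be written out in a few lines. This is only a minimal sketch: the function name, weights, and inputs are invented for illustration and describe no actual net. The point, in keeping with the argument above, is that the operation is a piece of syntax that any medium whatever could carry out.

```python
# Minimal sketch of the formal operation described above: summing input
# signals at a node according to connection strengths. All weights and
# inputs are hypothetical; the operation is pure syntax, indifferent to
# whatever medium implements it (silicon, paper, or people in a gym).
import math

def unit_output(inputs, weights, bias=0.0):
    """Weighted sum of inputs, squashed through a logistic function."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# The same formal result follows however the computation is realized.
print(unit_output([0.5, 1.0, 0.25], [0.8, -0.4, 1.2]))
```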

There is no substitute for going through real examples, so let's take a case where we know a little bit about how the brain works. One of the ways that cocaine works on the brain to produce its effects is that it impedes the capacity of the synaptic receptors to reabsorb competitively a specific neurotransmitter, norepinephrine. So now let us simulate the formal features of this in the Chinese gym, and we can do it to any degree of precision you like. Let messenger boys in the gym simulate molecules of norepinephrine. Let the desks simulate postsynaptic and presynaptic receptors. Introduce a bunch of wicked witches to simulate cocaine molecules. Now instead of rushing to the receptors like good neurotransmitters, the boys are pushed away by the wicked cocaine witches, so they have to wander aimlessly about the floor of the gym waiting to be reabsorbed. Now will someone try to tell me that this causes the whole gym "as a system" to feel a cocaine high? Or will Harnad tell me that because of the Other Minds Problem and the Systems Reply, I can't prove that the whole gym isn't feeling a cocaine high? Or will some Strong AI connectionist perhaps tell me we need to build a bigger gym? Neurobiology is a serious scientific discipline, and though still in its infancy it is not to be mocked. The Strong AI version of connectionism is a mockery, as the Chinese Gym Argument illustrates. Harnad, by the way, misses the point of the Chinese Gym. He thinks it is supposed to answer the Systems Reply. But that is not the point at all.
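To underscore what "simulating the formal features to any degree of precision" amounts to, the reuptake scenario just described can be sketched in a few lines of code as easily as with boys and witches in a gym. Every name and number below is invented for the purpose, and the toy model claims nothing about actual pharmacology; however faithfully the formal features were elaborated, nothing in the running program would feel a cocaine high, which is just the point of the example.

```python
# Toy sketch of the gym simulation described above (all quantities invented):
# transmitter "molecules" either get reabsorbed or are left wandering the
# cleft; "cocaine blockers" reduce the chance of reabsorption.
import random

def simulate_synapse(transmitters=1000, reuptake_prob=0.9, cocaine_blockers=False):
    """Count how many transmitter 'molecules' are left wandering after one pass."""
    if cocaine_blockers:
        reuptake_prob *= 0.2  # blockers crowd the reuptake sites
    return sum(1 for _ in range(transmitters) if random.random() > reuptake_prob)

random.seed(0)
print("without cocaine:", simulate_synapse())
print("with cocaine:   ", simulate_synapse(cocaine_blockers=True))
```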

The trilemma for Strong AI Connectionism can be stated succinctly: If we define the nets in terms of their computational properties, they are subject to the Chinese Room. Computation is defined syntactically, and syntax by itself is not sufficient for mental contents. If we define the nets in terms of their purely formal properties, independently of the physics of their implementation, then they are subject to the Chinese Gym. It is out of the question for empirical reasons that the purely formal properties implemented in any medium whatever should be able to duplicate the quite specific causal powers of neuronal systems. If, finally, we define the nets in terms of specific physical features of their architecture, such as voltage levels and impedance, then we have left the realm of computation and are now doing speculative neurobiology. Existing nets were not even designed with the idea of duplicating the causally relevant internal neurobiological properties. (Again, does anyone really doubt this?)

HARNAD RESPONSE TO SEARLE:

Agreement is boring. New ideas arise from challenges to current ones. Necessity is the mother of invention. So it is with relief that I see that Searle and I have plenty to disagree about:

Syntax vs. semantics: No, it wasn't obvious at all that a dynamic implementation of a purely syntactic system could not generate semantics. In fact, nothing is obvious in the area of semantics. Grandma "knew" all along that computers couldn't be thinking, Searle can't imagine how a gymful of boys or a robot could be thinking, I can't imagine how a lump of neurons could (and Father O'Grady, with a benign smile, knows they couldn't, if they hadn't had some help). Nothing faintly obvious here. So computationalism was worth a go -- until some of us started thinking about it (in no small measure thanks to Searle 1980), and the rest is still history in the making. But the force of Searle's argument certainly is not that it's obvious that there's no way to get from syntax to semantics; surely there's a bit more to it than that. Besides, I'll wager that, as obvious as it is that there's no way to get semantics from syntax, it'll be just as obvious that you can't get it from any other candidate that's clearly enough in focus for you to give it a thorough looking-over. That's what's called the mind/body problem.

Mind and meaning: Let me state something pretty plainly, something that Searle (e.g. 1990) has only been making tentative gestures toward (although the relevance and credibility of his testimony in the Chinese room -- to the effect that he does not understand Chinese -- relies on it completely): There's only one kind of meaning: the kind that my thoughts have when I think something, say, "the cat is on the mat," and in thinking it, I have something in mind. Put even more bluntly, there's something it's like to think that the cat is on the mat, and the kind of thing that that is like is the essential feature of thinking, and meaning. Take time to mull it over. I'm saying that only minds have meaningful states, and that their meaningfulness is derived entirely from their subjective quality. That's intrinsic semantics. Everything else, if it's interpretable as meaning anything at all, is just extrinsic semantics, derived intentionality, or what have you -- as in the pages of a book, the output of a computerized dictionary or the portent of a celestial configuration. If this is true, it is bad news for "unconscious thoughts," worse news for "unconscious minds" and even worse news for systems in which there is nobody home at all (as opposed to just someone sleeping) -- if such systems nevertheless aspire to have intrinsic semantics. Having extrinsic semantics just means being "interpretable as if it were meaningful" by a system that has intrinsic semantics.

Grounding and meaning: Is there a third possibility? Can something be more than just "interpretable as if it meant X" but less than a thought in a conscious mind? This is the mind-modeller's counterpart of the continuum hypothesis or P=NP, but it is both empirically and logically undecidable. The internal states of a grounded TTT system are not just formally interpretable as if they meant what they mean; the system itself acts in full conformity with the interpretation. Causal interaction with the objects that the symbols are interpretable as being about is not just syntax any more; syntax is just the formal relations among the symbols. But is that enough to guarantee that the semantics are now intrinsic? Or is grounded semantics just a "stronger" form of extrinsic semantics?

Ontology and epistemology: I don't make the ontic/epistemic confusions Searle thinks I make (I am kept too much on my toes pointing them out in others!). I am fully aware that not only the TT, but the TTT and even the TTTT are incapable of guaranteeing the presence of mind, and hence intrinsic meaning. But I'm also aware that the tests in the T-hierarchy are not just a series of behavioristic digressions from the correct empirical path; they are the empirical path; in fact, the TTTT exhausts the empirical possibilities. Searle himself is an advocate of the TTTT. He can't imagine settling for less. Yet he admits that we only want the relevant TTTT powers. How are we to know which ones those are?

Let's admit that we're doing reverse engineering rather than "basic science" and hope that the constraint of finding out what is needed to make a system that can DO everything the brain can do will allow us to pick out its relevant powers. No guarantees, of course, but worrying too much about the outcome is tantamount to believing that (1) TTT-indistinguishable Zombies could have made it in the world just as successfully as we could, but we just don't happen to be such Zombies (but then how could evolution tell the difference, favoring us, since it's not a mind-reader either?) and (2) the degrees of freedom for successfully building TTT-scale systems are large enough to admit radically different solutions, some Zombies and some not. I think it is more likely that the TTT is just the right relevance filter for the TTTT. Otherwise we're stuck with modelling a lot of what might be irrelevant TTTT properties.

Is the Other-Minds Problem Irrelevant? As a form of skepticism -- worrying because we can't be sure other people have minds -- the other-minds problem is not particularly useful. But it is unavoidable when it comes to empirical work on other organisms, artificial mind-modelling or the brain itself. The question comes up naturally: How are we to ascertain whether or not this system has a mind? There are no guarantees, but there are some "dead end" signs (like the Chinese Room Argument), and, one hopes, some positive guides too, such as the TTT and groundedness. By Searle's lights, there is only one: the TTTT.

Is Transduction Unmotivated "Speculative Neurophysiology"? I think there is plenty of evidence that a large portion of the nervous system is devoted to sensory and motor transduction and their multiple internal analog projections (e.g. Chamberlain & Barlow 1982, Jeannerod 1994). Transduction is also motivated a priori by the logical requirements of a TTT robot, the real/virtual robot/world distinction, and immunity to the Chinese Room Argument. Besides, it's no kind of neurophysiology if one's empirical constraint is the TTT rather than the TTTT, as mine is.

A few loose ends: (1) Contrary to Searle's suggestion, there is (of course) a causal connection between the hardware of a machine and the software it is executing; it's just that those physical details are not relevant to the computation, and the causal connection is the wrong kind if a mind was what one was hoping to implement. (I think this is the same conclusion Searle wanted to draw.) (2) The cocaine example is a red herring, because nets are not being proposed as models for pharmacological function but for physiological function. But the gym example continues to be just a caricature rather than an argument. (3) My hybrid grounding program is not committed to computationalism (I would be content to see most of the cognitive groundwork done nonsymbolically), but I do think the internal substrate of language will turn out to have something symbolic about it. Besides, the Chinese Room Argument and the Symbol Grounding Problem show only that cognition can't all be just computation, not that cognition can't be computation at all. On the other hand, it's not clear whether a grounded symbol system, with its second layer of analog constraints, is still really much of a symbol system, in the formal syntactic sense, at all.
-- S.H.

REFERENCES

Chamberlain, S.C. & Barlow, R.B. (1982) Retinotopic organization of lateral eye input to Limulus brain. Journal of Neurophysiology 48: 505-520.

Fodor, J. & Pylyshyn, Z. (1988) Connectionism and cognitive architecture: A critical analysis. Cognition 28: 3-71.

Fodor, J. A. (1975) The Language of Thought. New York: Thomas Y. Crowell.

Harnad, S. (1987) (ed.) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346.

Harnad, S. (1990b) Against Computational Hermeneutics. (Invited commentary on Eric Dietrich's Computationalism) Social Epistemology 4: 167-172.

Harnad, S. (1990c) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321-327.

Harnad, S. (1990d) Symbols and Nets: Cooperation vs. Competition. Review of: S. Pinker and J. Mehler (Eds.) (1988) Connections and Symbols. Connection Science 2: 257-260.

Harnad, S. (1992a) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds.) Connectionism in Context. Springer Verlag.

Harnad, S. (1992b) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4) (October): 9-10.

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on the Virtual Mind. Minds and Machines (in press).

Jeannerod, M. (1994) The representing brain: neural correlates of motor intention and imagery. Behavioral and Brain Sciences 17(2) (in press).

Pylyshyn, Z. W. (1984) Computation and Cognition. Cambridge, MA: Bradford Books.

Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424.

Searle, J.R. (1990) Is the brain's mind a computer program? Scientific American 262: 26-31.