Re: Searle's Chinese Room Argument

From: Edwards, Dave (dpe197@soton.ac.uk)
Date: Wed Mar 15 2000 - 19:56:09 GMT


Searle, John. R. (1980) Minds, brains, and programs. Behavioral and
Brain Sciences 3 (3): 417-457
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

>Boardman:
>First he divides AI (Artificial Intelligence) into 'strong' and 'weak'
>flavours, 'weak' AI being the view that the computer is a very powerful tool
>to aid the study of the mind. He gives an example:

> SEARLE:
> For example, it enables us to formulate and test hypotheses in a more
> rigorous and precise fashion.

>Boardman:
>Then on strong AI:
> SEARLE:
> But according to strong AI, the computer is not merely a
> tool in the study of the mind; rather, the appropriately programmed
> computer really is a mind, in the sense that computers given the right
> programs can be literally said to understand and have other cognitive
> states. In strong AI, because the programmed computer has cognitive
> states, the programs are not mere tools that enable us to test
> psychological explanations; rather, the programs are themselves the
> explanations.

>Boardman:
>His Chinese room argument is against strong AI, which for the remainder
>of his text he shortens to just 'AI'.

> SEARLE:
> I will consider the work of Roger Schank and his colleagues at Yale
> (Schank & Abelson 1977), because I am more familiar with it than I am
> with any other similar claims, and because it provides a very clear
> example of the sort of work I wish to examine.

>Boardman:
>He applies his argument to Schank's relatively simple program that simulates
>the ability of the mind to understand stories. In particular, it only
>understands stories about restaurants. The understanding was reduced to such
>things as basic foodstuffs, food qualities, possible reactions, tips and
>payment of the bill. The program is then asked questions about the story,
>such as 'Was the foodstuff eaten?', to which it gives correct answers.

> SEARLE:
> When the machine is given the story and then asked the question,
> the machine will print out answers of the sort that we would expect
> human beings to give if told similar stories. Partisans of strong
> AI claim that in this question and answer sequence the machine
> is not only simulating a human ability but also
> 1. that the machine can literally be said to understand the story
> and provide the answers to questions, and
> 2. that what the machine and its program do explain the human
> ability to understand the story and answer questions about it.

>Boardman:
>I would be quite surprised if there really were many people stating that a
>program that can pass t1 (the toy level of the Turing Test) would be
>considered to literally understand; perhaps we should instead consider a
>program that can pass at the T2 level (40 years of conversation as a pen pal).

Agreed, but perhaps Searle is trying to simplify the T2 test to this story test?
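
To make the 't1' (toy) level concrete, here is a minimal sketch, in Python, of
the kind of script-based question answering Boardman describes. It is only an
illustration I have made up (the script, rules and names are my own), not
Schank's actual program; the point is that every 'correct' answer comes out of
formal string matching.

# A hypothetical 't1'-level script program (a toy illustration, not
# Schank's actual code): every answer comes from formal matching against
# a hand-written restaurant script, with no understanding anywhere.

RESTAURANT_SCRIPT = {
    "was the food eaten?": lambda story: "yes" if "ate" in story else "no",
    "was the bill paid?":  lambda story: "yes" if "paid" in story else "no",
    "was a tip left?":     lambda story: "yes" if "tip" in story else "no",
}

def answer(story, question):
    # purely formal: match the question string, then apply the matching rule
    rule = RESTAURANT_SCRIPT.get(question.lower().strip())
    return rule(story.lower()) if rule else "I don't know"

story = "A man ordered a hamburger, ate it, paid the bill and left a tip."
print(answer(story, "Was the food eaten?"))   # -> yes
print(answer(story, "Was a tip left?"))       # -> yes

Nothing in this sketch grasps what food or tips are; swap the English strings
for Chinese characters and the program neither gains nor loses anything, which
is exactly the intuition the room is meant to pump.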

> SEARLE:
> Both claims seem to me to be totally unsupported by Schank's work, as I
> will attempt to show in what follows.

>Boardman:
>Searle then suggests that one way to find out if the computer running the
>program has a mind is to run that program in another mind and then ask that
>mind. He sets up the scenario where a person (himself) is given a program to
>run, this is instructions written in the persons native language, say
>English. They are also given large batch of writing (script1), a story
>(script2) and questions (script3), which are all in an unknown language, say
>Chinese.

>The person is locked in the room and follows the instructions, which are
>easily understood. There are two sets of instructions which tell the person
>how to match the story to the script, and how to match the questions to the
>story and the script. The person is not aware of the titles given to the
>instructions and scripts, as to them they are merely unknown squiggles in the
>foreign language.

>He then adds a complication:

> SEARLE:
> Now just to complicate the story a little, imagine that these people also
> give me stories in English, which I understand, and they then ask me
> questions in English about these stories, and I give them back answers in
> English. Suppose also that after a while I get so good at following the
> instructions for manipulating the Chinese symbols and the programmers get
> so good at writing the programs that from the external point of view -- that
> is, from the point of view of somebody outside the room in which I am
> locked -- my answers to the questions are absolutely indistinguishable
> from those of native Chinese speakers. Nobody just looking at my answers
> can tell that I don't speak a word of Chinese.

>Boardman:
>If this were to be converted to the T2 (pen-pal) test then stories and
>questions would be combined (into letters), the program would have to remain
>constant (Searle's t1 version gives a new program supplement with each set of
>questions), and answers would also become letters.

>Searle then explains that his answers in both languages are
>indistinguishable from those of native speakers of each language. But with
>respect to the Chinese he is:

> SEARLE:
> I am simply an instantiation of the computer program.

Searle is the hardware that the instantiation of the computer program runs on;
he is not the program.

>Boardman:
>Now tackling the AI claims:

> SEARLE:
> Now the claims made by strong AI are that the programmed computer
> understands the stories and that the program in some sense explains human
> understanding. But we are now in a position to examine these claims in
> light of our thought experiment.

>Boardman:
>1. Searle points out that he doesn't understand a word of Chinese even
>though his answers are indistinguishable from native speakers. He suggests
>that this means that Shank's computer understands nothing, which is probably
>the case. But since his thought experiment needed programmers to supplement
>his instructions with some that match the questions to suitable answers, it
>is not testing his understanding of Chinese but the programmers who are
>giving him the instructions. Presumably Shank's computer required
>re-programming for each batch of questions in which case it would be very
>doubtful for it to exhibit any mind like properties.

>If we then apply this argument to a T2 simulation with a single program and
>no further tweaking from programmers, then Searle is assuming that such a
>feat is possible at a level of complexity such that it could be internalised
>in a human or executed in a room, such that a pen pal would believe it was
>conversing with a mind. Say you're talking about a 40-year T2 test: if the
>answers take 2 years for Searle to 'calculate' in his room, then the
>human conversing with it is unlikely to consider it to have a mind. If we
>assume that the instructions are simple enough to be executed quickly, then
>Searle would soon come to learn the language he is conversing in. His
>understanding wouldn't affect the answers he is giving; they remain
>determined by his instructions.

I think you are being a little pedantic. This is designed to be a simple
thought experiment, and so we can assume that it did not need re-programming
and that it was quick enough; it could then pass T2.

I agree that his understanding will not have any bearing on the answers and so
is irrelevant to the end result.

> SEARLE:
> 2. As regards the second claim, that the program explains human
> understanding, we can see that the computer and its program do not
> provide sufficient conditions of understanding since the computer and the
> program are functioning, and there is no understanding. But does it even
> provide a necessary condition or a significant contribution to
> understanding?

>Boardman:
>Searle then explains that AI supporters claim that when understanding a
>story in English he is doing the same thing as he is when manipulating the
>Chinese symbols; it's just more formal symbol manipulation that separates the
>English case from the Chinese one. He doesn't consider himself to have
>disproved this but considers it an incredible claim to make.

The hardware (Searle) that the program is running on can have no possibility of
understanding what it is doing, just as our neurons have no understanding of
what we are thinking. But the program could have understanding, just as our
minds do.

>Boardman:
>Searle then goes on to discuss understanding and whether computational
>operations on formally defined elements are in any way appropriate to
>explain it. He explains that there are clear cases where understanding
>applies and where it doesn't:

> SEARLE:
> and these
> two sorts of cases are all I need for this argument. I understand
> stories in English; to a lesser degree I can understand stories in
> French; to a still lesser degree, stories in German; and in Chinese, not
> at all. My car and my adding machine, on the other hand, understand
> nothing: they are not in that line of business. We often attribute
> "understanding" and other cognitive predicates by metaphor and analogy to
> cars, adding machines, and other artefacts, but nothing is proved by such
> attributions. We say, "The door knows when to open because of its
> photoelectric cell," "The adding machine knows how (understands how to,
> is able) to do addition and subtraction but not division," and "The
> thermostat perceives changes in the temperature."
> ...
> The sense in which an
> automatic door "understands instructions" from its photoelectric cell is
> not at all the sense in which I understand English. If the sense in which
> Schank's programmed computers understand stories is supposed to be the
> metaphorical sense in which the door understands, and not the sense in
> which I understand English, the issue would not be worth discussing.

Can Searle not understand that there are different levels of complexity? The
door can't understand anything because it has not been programmed to. But a
suitably complex program could (though not necessarily will) understand.
See the ant-calculator argument later.

>Boardman:
>He then moves on to some of the replies:

> SEARLE:
> I. The systems reply (Berkeley). "While it is true that the individual
> person who is locked in the room does not understand the story, the fact
> is that he is merely part of a whole system, and the system does
> understand the story. The person has a large ledger in front of him in
> which are written the rules, he has a lot of scratch paper and pencils
> for doing calculations, he has 'data banks' of sets of Chinese symbols.
> Now, understanding is not being ascribed to the mere individual; rather
> it is being ascribed to this whole system of which he is a part."

>Boardman:
>Searle's response to this theory is to get the individual to internalise all
>the elements of the system. This is feasible on a t1 level where
>everything is nice and simple. But internalising a T2-passing program,
>which would presumably be more complex than any program currently written,
>is another matter: imagine trying to internalise and infallibly run a 300 Mb
>install of any Microsoft software. Even trying to internalise a dictionary is
>the kind of thing that only an elite few humans can manage.

Even assuming it is possible for Searle (or anyone) to memorize it, it will
make no difference to the argument. Searle is the hardware and as such can have
no possibility of understanding what the program is doing. The program could
have understanding, but Searle would not know.

> SEARLE:
> Actually I feel somewhat embarrassed to give even this answer to the
> systems theory because the theory seems to me so implausible to start
> with. The idea is that while a person doesn't understand Chinese, somehow
> the conjunction of that person and bits of paper might understand
> Chinese.

>Boardman:
>Which does seem quite a reasonable argument. He goes on to explain that this
>implies two subsystems, one that understands English and one that
>understands Chinese; it's just that they don't talk to each other. The English
>one understands "food" to be an edible substance, but the Chinese one only
>knows that "squiggle squiggle" is followed by "squoggle squoggle".

> SEARLE:
> II. The Robot Reply (Yale). "Suppose we wrote a different kind of program
> from Schank's program. Suppose we put a computer inside a robot, and this
> computer would not just take in formal symbols as input and give out
> formal symbols as output, but rather would actually operate the robot in
> such a way that the robot does something very much like perceiving,
> walking, moving about, hammering nails, eating, drinking -- anything you
> like. The robot would, for example, have a television camera attached to
> it that enabled it to 'see,' it would have arms and legs that enabled it
> to 'act,' and all of this would be controlled by its computer 'brain.'
> Such a robot would, unlike Schank's computer, have genuine understanding
> and other mental states."

>Boardman:
>This is in effect a kind of T3/T2 suggestion and would require another order
>of magnitude in the programming complexity. Searle now suggests that the
>adding of perceptual and motor capacities adds nothing by way of
>understanding. He changes his thought experiment so that now in addition to
>the Chinese symbols there is a stream of symbols representing the sensory
>data coming from the robot and some of the answers cause the movement of
>the robot. He emphasise that he is still manipulating formal symbols. Now
>timing becomes crucially important because if your robot comes to a stair
>case and continues to walk forwards before you've managed to process the
>visual information representing the stairs then it will fall down them and
>possibly break. I suggest that you would need a multiple Searle's to handle
>a robot by symbol manipulation if it is to be able to react with any
>immediacy to its environment.

I feel that this is a thought experiment and might, one day, be carried out. As
such, the speed of the robot is irrelevant, because it is assumed to be fast
enough by the very nature of its being a thought experiment, not an actual task
to be done today.

Aside from this, I agree that it basically comes down to symbol manipulation,
and so we are back to the original argument.

>Boardman:
>The replies people have given to Searle are now moving away from strong AI,
>and the further they get from strong AI, the more difficult and tenuous his
>arguments get.

> SEARLE:
> III. The brain simulator reply (Berkeley and M.I.T.). "Suppose we design
> a program that doesn't represent information that we have about the
> world, such as the information in Schank's scripts, but simulates the
> actual sequence of neuron firings at the synapses of the brain of a
> native Chinese speaker when he understands stories in Chinese and gives
> answers to them. The machine takes in Chinese stories and questions about
> them as input, it simulates the formal structure of actual Chinese
> brains in processing these stories, and it gives out Chinese answers as
> outputs. We can even imagine that the machine operates, not with a single
> serial program, but with a whole set of programs operating in parallel,
> in the manner that actual human brains presumably operate when they
> process natural language. Now surely in such a case we would have to say
> that the machine understood the stories; and if we refuse to say that,
> wouldn't we also have to deny that native Chinese speakers understood the
> stories? At the level of the synapses, what would or could be different
> about the program of the computer and the program of the Chinese brain?"

>Boardman:
>Searle considers this argument to be irrelevant to strong AI, which he takes
>to be the project of understanding the mind without doing neurophysiology.

> SEARLE:
> If we had to know how the brain worked to do AI, we wouldn't bother
> with AI.

>Boardman:
>So he then goes on to explain why this still doesn't give the computer
>understanding.

> SEARLE:
> To see this, imagine that instead of a monolingual man in a room
> shuffling symbols we have the man operate an elaborate set of water pipes
> with valves connecting them. When the man receives the Chinese symbols,
> he looks up in the program, written in English, which valves he has to
> turn on and off. Each water connection corresponds to a synapse in the
> Chinese brain, and the whole system is rigged up so that after doing all
> the right firings, that is after turning on all the right faucets, the
> Chinese answers pop out at the output end of the series of pipes.

>Boardman:
>This is a bit simplistic: the idea that you could construct enough pipes,
>appropriately connected to exactly copy a Chinese mind, in such a way that
>each is identifiable and navigable by a man, seems rather improbable. Merely
>setting the initial values of all the valves would take a few months, if not
>years.

This is irrelevant: this is a thought experiment, and so it's the idea that
matters, not the implementation of it and its difficulties.

> SEARLE:
> But the man certainly doesn't
> understand Chinese, and neither do the water pipes, and if we are tempted
> to adopt what I think is the absurd view that somehow the conjunction of
> man and water pipes understands, remember that in principle the man can
> internalise the formal structure of the water pipes and do all the
> "neuron firings" in his imagination.

>Boardman:
>This is like expecting our individual neurons to understand what they're
>doing. Also, internalising a whole human brain inside another at a biological
>level would be even more difficult than internalising a computer program.

I agree with Adam here: Searle is trying to get the hardware, not the program,
to understand, which is impossible.

According to computationalism, an AI (or mind) is implementation independent,
so it will not matter whether it is run on a brain, a computer, or even a set
of water pipes.
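
To illustrate what 'implementation independent' means here, a toy sketch in
Python (my own example, not anything from Searle's paper): the same formal
program realized in two different ways, with identical input/output behaviour.

# One formal program, two different 'realizations' of it.

RULES = {"squiggle": "squoggle", "ping": "pong"}   # the formal program

def run_on_memory(symbol):
    # realization 1: an ordinary in-memory lookup
    return RULES[symbol]

def run_on_pipes(symbol):
    # realization 2: a laborious 'water-pipe' style scan, where each
    # rule is a valve that only opens for its own input symbol
    for valve_in, valve_out in RULES.items():
        if valve_in == symbol:
            return valve_out
    raise KeyError(symbol)

assert run_on_memory("squiggle") == run_on_pipes("squiggle") == "squoggle"

Both realizations compute the same function, which is all the formal program
fixes; whether either of them 'understands' is exactly what is in dispute.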

> SEARLE:
> IV. The combination reply (Berkeley and Stanford). 'While each of the
> previous three replies might not be completely convincing by itself as a
> refutation of the Chinese room counterexample, if you take all three
> together they are collectively much more convincing and even decisive.
> Imagine a robot with a brain-shaped computer lodged in its cranial
> cavity, imagine the computer programmed with all the synapses of a human
> brain, imagine the whole behaviour of the robot is indistinguishable from
> human behaviour, and now think of the whole thing as a unified system and
> not just as a computer with inputs and outputs. Surely in such a case we
> would have to ascribe intentionality to the system. '

>Boardman:
>Searle agrees that in this case it becomes rational and irresistible to
>accept the hypothesis, but claims that it still wouldn't have a mind.

> SEARLE:
> In such a case we would regard the robot as an ingenious mechanical dummy.

>Boardman:
>He also touches on the fact that we ascribe intentionality to animals,
>suggesting that's because we can't make sense of the animal's behaviour
>without doing so, and because they are made of similar stuff to ourselves.

How far do you have to go before a computer/robot is 'ascribed intentionality'?
Making the components out of neurons? Or will circuits do? What about a
simulation of neurons? That's a computer program, and so is implementation
independent.

>Boardman:
>Searle then mentions 'The other minds reply', which states that we can never
>know that another being has a mind except by being that other being, so if
>you consider another human to have a mind you must consider a computer to
>have one too. He counters this by saying that we know that simple
>computational processes don't have minds, so why should complex ones?

The Ant-Calculator argument:
Does an ant have a mind? It is made of the same stuff as we are, and we assume
we have minds. I think we all agree that a calculator does not have a mind.
But, just as a more complicated version of an ant's brain (a human brain) has a
mind, why can't a more complicated version of a calculator (a computer) have
one?

>The final reply, 'The many mansions reply', he has no problem with; it
>redefines strong AI to be whatever artificially produces and explains
>cognition. This causes his objections to "no longer apply because there is
>no longer a testable hypothesis for them to apply to".

> SEARLE:
> Let us now return to the question I promised I would try to answer:
> granted that in my original example I understand the English and I do not
> understand the Chinese, and granted therefore that the machine doesn't
> understand either English or Chinese, still there must be something about
> me that makes it the case that I understand English and a corresponding
> something lacking in me that makes it the case that I fail to understand
> Chinese. Now why couldn't we give those somethings, whatever they are, to
> a machine?

>Boardman:
>Searle has no reason why we should not be able to do this, but we don't know
>what it is and he believes it cannot be defined in terms of a computer
>program.

Imagine a program which exactly models a human brain (to the degree that
matters, atoms perhaps). This program then runs an AI program. Everything that
a human brain has, the simulated brain has. Can there be anything missing? If
it is a physical item (e.g. eyes, ears), then include that in the model as
well.
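
As a rough picture of what 'modelling the brain in a program' could look like,
here is a deliberately crude sketch in Python (made-up threshold units and
random weights, nowhere near atoms or real neurons); the brain-simulator reply
only requires that the formal structure of the firings be reproduced.

import random

N = 5
# made-up connection strengths between N toy 'neurons'
weights = [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]
state = [1, 0, 0, 1, 0]          # which units are currently firing

def step(state):
    # one synchronous update: a unit fires if its weighted input exceeds 0
    return [1 if sum(w * s for w, s in zip(weights[i], state)) > 0 else 0
            for i in range(N)]

for _ in range(3):
    state = step(state)
    print(state)

Whether running something like this, at whatever scale and fidelity, would
still be missing whatever a brain has is precisely the point on which Searle
and the brain-simulator reply disagree.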

> SEARLE:
> But the main point of the present argument is that no purely formal model
> will ever be sufficient by itself for intentionality because the formal
> properties are not by themselves constitutive of intentionality, and they
> have by themselves no causal powers except the power, when instantiated,
> to produce the next stage of the formalism when the machine is running.
> And any other causal properties that particular realizations of the
> formal model have, are irrelevant to the formal model because we can
> always put the same formal model in a different realization where those
> causal properties are obviously absent. Even if, by some miracle Chinese
> speakers exactly realize Schank's program, we can put the same program in
> English speakers, water pipes, or computers, none of which understand
> Chinese, the program notwithstanding.

>Boardman:
>Searle then goes on to clarify whether machines can think:

> SEARLE:
> What matters about brain operations is not the formal shadow cast by the
> sequence of synapses but rather the actual properties of the sequences.
> All the arguments for the strong version of artificial intelligence that
> I have seen insist on drawing an outline around the shadows cast by
> cognition and then claiming that the shadows are the real thing. By way
> of concluding I want to try to state some of the general philosophical
> points implicit in the argument. For clarity I will try to do it in a
> question and answer fashion, and I begin with that old chestnut of a
> question:
> "Could a machine think?"
> The answer is, obviously, yes. We are precisely such machines.
> "Yes, but could an artefact, a man-made machine think?"
> Assuming it is possible to produce artificially a machine with a nervous
> system, neurons with axons and dendrites, and all the rest of it,
> sufficiently like ours, again the answer to the question seems to be
> obviously, yes.

How about a program which models these parts? It will still be a program, not
hardware.

> SEARLE:
> If you can exactly duplicate the causes, you could
> duplicate the effects. And indeed it might be possible to produce
> consciousness, intentionality, and all the rest of it using some other
> sorts of chemical principles than those that human beings use. It is, as
> I said, an empirical question.
> "OK, but could a digital computer think?"
> If by "digital computer" we mean anything at all that has a level of
> description where it can correctly be described as the instantiation of a
> computer program, then again the answer is, of course, yes, since we are
> the instantiations of any number of computer programs, and we can think.

Searle has just admitted that a set of computer programs can think. A human is
a 'digital computer' in this sense; an actual computer is one too, and could
run the same programs, and thus could think.

> SEARLE:
> "But could something think, understand, and so on solely in virtue of
> being a computer with the right sort of program? Could instantiating a
> program, the right program of course, by itself be a sufficient condition
> of understanding?"
> This I think is the right question to ask, though it is usually confused
> with one or more of the earlier questions, and the answer to it is no.
> "Why not?"
> Because the formal symbol manipulations by themselves don't have any
> intentionality; they are quite meaningless; they aren't even symbol
> manipulations, since the symbols don't symbolize anything. In the
> linguistic jargon, they have only a syntax but no semantics. Such
> intentionality as computers appear to have is solely in the minds of
> those who program them and those who use them, those who send in the
> input and those who interpret the output.

>Boardman:
>This sounds pretty good the symbols are ungrounded, the best way of
>enabling the computer to understand the meaning of its symbols is to do it
>the human way, learning. Get your computer to evolve and learn, start it as
>an amoeba and work its way up, wouldn't it then have a mind?

Yes, I agree that this method could also work.

>Searle goes on to make some interesting points:

> SEARLE:
> The distinction between the program and
> its realization in the hardware seems to be parallel to the distinction
> between the level of mental operations and the level of brain operations.
> ...
> Stones, toilet paper, wind,
> and water pipes are the wrong kind of stuff to have intentionality in the
> first place -- only something that has the same causal powers as brains
> can have intentionality -- and though the English speaker has the right
> kind of stuff for intentionality you can easily see that he doesn't get
> any extra intentionality by memorizing the program, since memorizing it
> won't teach him Chinese.

>Boardman:
>It seems entirely plausible that the act of memorising such a program might
>well teach one Chinese, or at least that to do with restaurant story's.

I disagree: could you understand how a calculator works by memorising its
machine code? No.

> SEARLE:
> The idea that computer
> simulations could be the real thing ought to have seemed suspicious in
> the first place because the computer isn't confined to simulating mental
> operations, by any means. No one supposes that computer simulations of a
> five-alarm fire will burn the neighbourhood down or that a computer
> simulation of a rainstorm will leave us all drenched. Why on earth would
> anyone suppose that a computer simulation of understanding actually
> understood anything?

A rainstorm is wet. A simulated rainstorm has simulated wetness. A simulated
person will get a simulated drenching. A simulated rainstorm is not supposed to
be really wet; that's not the purpose of it. If you wanted to create a real
rainstorm, then you would make a duplication of it: buckets of water, perhaps?

>Boardman:
>A computer simulation of understanding doesn't need to understand anything
>to be useful so why should it!

>So to conclude, Searle's arguments demonstrate that a t1-passing program
>cannot be said to have a mind, and that T2- and T3-passing programs, depending
>on how they are written, also don't have minds. However, I think that some of
>his arguments are doubtful, and that a T2- or T3-passing program that has
>evolved and had a 'growing up' and an education could have a mind with
>intentionality and causality.

>Some of my main objections to his arguments are:

>Why a 'story comprehension' simulating program could possibly be expected
>to have a mind; other animals we consider to have minds don't understand
>stories.

>Expecting a human to internalise a computer program to perform any complex
>task without coming to some internal understanding of the task, such as
>learning Chinese.

My argument above again (the one with machine code).

>Boardman:
>Expecting a simulation in a computer (with current hardware) or a human to be
>able to run at a speed such that its responses are still valid for the
>environment it is based in. Take for example a newborn baby that is too
>slow at breathing to get enough oxygen to live; doctors would try putting it
>on a ventilator and drip-feeding it. If it still remained unresponsive by
>an order of magnitude greater than has been seen before, then after a while
>the baby would be considered dead and the life-supporting machinery turned
>off. There was life, just not quick enough for us to recognise it as such.

Interesting idea, but I didn't think we were concerned with speed. This is all
hypothetical, and thus would run on a hypothetical machine which is going to be
fast enough.

According to Searle, a brain has something a computer doesn't. Why can't a
computer just simulate the brain? It would not be missing anything then, would
it?

Edwards, Dave


