Re: Searle: Minds, Brains and Programs

From: Bell Simon (smb398@ecs.soton.ac.uk)
Date: Thu May 24 2001 - 15:32:14 BST


Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3(3): 417-457.

Bell:
Searle in this paper attempts to show that a computer running a
program can never understand a piece of text, or anything else for
that matter, in the way that humans can. He reaches this conclusion
via his Chinese Room argument.

>SEARLE:
>But according to strong AI, the computer is not merely a tool in
>the study of the mind; rather, the appropriately programmed
>computer really is a mind, in the sense that computers given the
>right programs can be literally said to understand and have other
>cognitive states. In strong AI, because the programmed computer
>has cognitive states, the programs are not mere tools that enable
>us to test psychological explanations; rather, the programs are
>themselves the explanations.

Bell:
Searle's initial use of the word 'understand' should be treated as
an important definitional matter. He indicates that understanding
involves the presence of 'cognitive states', but this again is very
unclear. If understanding is to be defined in terms of, or in
relation to, a sense of self, then because of the other-minds
barrier it cannot be tested for in either machines or people. If
understanding an event means appreciating its relevance and
consequences in a specific domain, then it can be described as
matching the event to a previous occurrence and applying the known
outcome to the new situation. The rest of Searle's argument treats
the notion of understanding as if it were an axiom of logic, when it
should have been defined more rigorously.

>SEARLE:
>"A man went into a restaurant and ordered a hamburger. When the
>hamburger arrived it was burned to a crisp, and the man stormed
>out of the restaurant angrily, without paying for the hamburger
>or leaving a tip." Now, if you are asked "Did the man eat the
>hamburger?" you will presumably answer, "No, he did not."

Bell:
Before deciding that a machine could not perform the same task, we
must examine why the human response would be 'no'. The human
associates storming out of a restaurant without paying with
displeasure, and given the situation the displeasure is probably due
to the burnt hamburger. We presume that the man did not like burnt
hamburgers (as we probably would not), and so conclude that he did
not eat it. This is not a remarkable inference; it simply combines
personal experience with the most likely explanation. Put the same
question to someone who had no notion of hamburgers, paying, or
restaurants, and they would not know whether the man had eaten the
burger or not. The example shows the application of knowledge, not
that a machine cannot understand the situation. A rough sketch of
this kind of knowledge application is given below.
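
As a toy illustration (my own example, not Schank's actual program;
the rules and event names are invented), a few lines of Python show
how stored restaurant knowledge can be matched against a story to
answer a question the story never states explicitly:

    # A minimal sketch of script-based inference with hypothetical rules.
    def probably_ate(events):
        # Rule: storming out without paying signals displeasure with the
        # food, so assume the expected 'eat' step of the script was skipped.
        if "leave_angrily" in events and "pay" not in events:
            return False
        # Rule: paying fits the normal restaurant script, so assume he ate.
        if "pay" in events:
            return True
        return None  # not enough stored knowledge to decide

    story = ["enter", "order", "receive_burnt_food", "leave_angrily"]
    print(probably_ate(story))  # -> False, the everyday human inference

A questioner without these stored rules could not answer either,
which is exactly the point made above.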

>SEARLE:
>From the external point of view -- from the point of view of
>someone reading my "answers" -- the answers to the Chinese
>questions and the English questions are equally good. But in the
>Chinese case, unlike the English case, I produce the answers by
>manipulating uninterpreted formal symbols. As far as the Chinese
>is concerned, I simply behave like a computer; I perform
>computational operations on formally specified elements. For the
>purposes of the Chinese, I am simply an instantiation of the
>computer program.

Bell:
Searle must not confuse executing the program with being an
instantiation of the program. Searle the man is merely the
(interchangeable) hardware upon which the program is run. The fact
that he has memorised the program is irrelevant; if anything it
demonstrates that in AI the program and the hardware are separate.

>SEARLE:
>Now the claims made by strong AI are that the programmed computer
>understands the stories and that the program in some sense explains
>human understanding.

Bell:
The understanding, or lack of it, does not reside in the computer as
a whole. The characteristics of a computer program do not originate
from the machine, but from the algorithms that it executes. A
program can be executed on many different computers with identical
results; the hardware is only a means of carrying out the algorithms
and has no importance in itself, as the sketch below illustrates.
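
To make the point concrete, here is a small toy example (my own
illustration, not from the paper): the same trivial 'add two
numbers' algorithm gives identical results whether it is run
directly by the host Python interpreter or by a hand-rolled register
machine.

    # The algorithm as data: a tiny instruction list.
    PROGRAM = [
        ("LOAD", "a"),   # load input a into the accumulator
        ("ADD", "b"),    # add input b
        ("HALT", None),
    ]

    def run_on_toy_machine(program, inputs):
        # One possible 'hardware': a hand-rolled interpreter.
        acc = 0
        for op, arg in program:
            if op == "LOAD":
                acc = inputs[arg]
            elif op == "ADD":
                acc += inputs[arg]
            elif op == "HALT":
                break
        return acc

    def run_directly(inputs):
        # Another 'hardware': the host Python interpreter itself.
        return inputs["a"] + inputs["b"]

    inputs = {"a": 2, "b": 3}
    print(run_on_toy_machine(PROGRAM, inputs), run_directly(inputs))  # 5 5

Two very different realisations, one algorithm, identical result:
the algorithm, not the machinery, carries the behaviour.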

>SEARLE:
>As regards the first claim, it seems to me quite obvious in the
>example that I do not understand a word of the Chinese stories. I
>have inputs and outputs that are indistinguishable from those of
>the native Chinese speaker, and I can have any formal program you
>like, but I still understand nothing. For the same reasons,
>Schank's computer understands nothing of any stories, whether in
>Chinese, English, or whatever, since in the Chinese case the
>computer is me, and in cases where the computer is not me, the
>computer has nothing more than I have in the case where I
>understand nothing.

Bell:
The fact that Searle does not understand Chinese only demonstrates
that the hardware is separate from the program. It is not sufficient
to state that Searle is the system and so can make this observation:
the part of Searle that declares his own ignorance is not the whole
system, but only the part that processes the algorithms. Searle
cannot comment upon the algorithms' level of cognition since, by his
own words, he is performing uninterpreted symbol manipulation.

>SEARLE:
>My response to the systems theory is quite simple: let the
>individual internalize all of these elements of the system. He
>memorizes the rules in the ledger and the data banks of Chinese
>symbols, and he does all the calculations in his head. The
>individual then incorporates the entire system. There isn't
>anything at all to the system that he does not encompass. We can
>even get rid of the room and suppose he works outdoors. All the
>same, he understands nothing of the Chinese, and a fortiori
>neither does the system, because there isn't anything in the
>system that isn't in him. If he doesn't understand, then there is
>no way the system could understand because the system is just a
>part of him.

Bell:
Searle's response to the systems reply shows that he does not fully
understand what it is saying. It is irrelevant where the written
algorithms are physically located: in a room, on some paper, or in
his head. The fact that he can do the calculations in his head has
no bearing on the fact that Searle is logically separate from the
algorithms. Physically the entire system is within Searle's body,
but it does not follow that if the algorithms demonstrate
understanding then Searle must also do so.

>SEARLE:
>Whereas the English subsystem knows that "hamburgers" refers to
>hamburgers, the Chinese subsystem knows only that "squiggle
>squiggle" is followed by "squoggle squoggle." All he knows is that
>various formal symbols are being introduced at one end and
>manipulated according to rules written in English, and other
>symbols are going out at the other end.

Bell:
Searle seems to believe that a symbol manipulation system would have
to work on the basic premise that each symbol represents one thing
in the real world, such as a hamburger. In his eyes it therefore
follows that the system would have no notion of what the symbol
stood for (a hamburger), and so symbol manipulation cannot be the
correct story. He does not seem to appreciate the complexity of the
brain, and hence of the symbol system, if one exists. A neuron in
the brain does not 'understand' what the neural signal it processes
refers to, but this is irrelevant.

>SEARLE:
>McCarthy, for example, writes, "Machines as simple as thermostats
>can be said to have beliefs, and having beliefs seems to be a
>characteristic of most machines capable of problem solving
>performance" (McCarthy 1979). Anyone who thinks strong AI has a
>chance as a theory of the mind ought to ponder the implications of
>that remark. We are asked to accept it as a discovery of strong AI
>that the hunk of metal on the wall that we use to regulate the
>temperature has beliefs in exactly the same sense that we, our
>spouses, and our children have beliefs, and furthermore that
>"most" of the other machines in the room -- telephone, tape
>recorder, adding machine, electric light switch, -- also have
>beliefs in this literal sense. One gets the impression that
>people in AI who write this sort of thing think they can get away
>with it.

Bell:
I believe that Searle and McCarthy are using different definitions
of the word 'belief'. It is only an ambiguous descriptive word;
Searle should not worry about his thermostat starting to preach
Buddhism.

>SEARLE:
>The first thing to notice about the robot reply is that it tacitly
>concedes that cognition is not solely a matter of formal symbol
>manipulation, since this reply adds a set of causal relation with
>the outside world.

Bell:
The fact that a computer is put within a robot and fed video data
instead of text is totally irrelevant, and has no impact upon
whether cognition is a matter of symbol manipulation or not. The
brain has only one method of passing information in and out: neural
impulses, which could be coded as symbols. Two would suffice, one
for a signal and one for its absence. Whether the brain is attached
to a text reader or an eye, and whether it is locked in a room or
free to roam the world, has absolutely nothing to do with whether it
can potentially understand something. A sketch of such a two-symbol
coding is given below.
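
As a toy illustration (my own, and of course a caricature of real
neural coding), a spike train can be written down over a two-symbol
alphabet simply by recording, at each time step, whether an impulse
is present:

    def encode_spike_train(impulse_times, duration, step=1):
        # Write the train as a string over the symbols {0, 1}.
        return "".join(
            "1" if t in impulse_times else "0"
            for t in range(0, duration, step)
        )

    # Whatever the input organ (eye, ear, text reader), what reaches the
    # brain could in principle be transcribed in this two-symbol alphabet.
    print(encode_spike_train({1, 3, 4}, duration=8))  # -> "01011000"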

>SEARLE:
>The problem with the brain simulator is that it is simulating the
>wrong things about the brain. As long as it simulates only the
>formal structure of the sequence of neuron firings at the synapses,
>it won't have simulated what matters about the brain, namely its
>causal properties, its ability to produce intentional states.

Bell:
It is clear from this that Searle does not believe that the brain
gets its characteristics from its physical state, or at least not in
any way that could be simulated. His reason, however, is not
explained in a satisfactory manner; he merely states that it is an
"absurd view" to think that it could be simulated.

>SEARLE:
>Now the computer can pass the behavioural tests as well as they can
>(in principle), so if you are going to attribute cognition to
>other people you must in principle also attribute it to computers.
>This objection really is only worth a short reply. The problem in
>this discussion is not about how I know that other people have
>cognitive states, but rather what it is that I am attributing to
>them when I attribute cognitive states to them.

Bell:
The point of the other minds argument is that we believe our
consciousness to be something special which surely cannot be
explained away by computation, since that would mean that we do not
have free will but only think that we do. There is a perceived
decoupling between consciousness and the physical brain, which
computation would violate.

>SEARLE:
>The many mansions reply (Berkeley). "Your whole argument
>presupposes that AI is only about analogue and digital computers.
>But that just happens to be the present state of technology.
>Whatever these causal processes are that you say are essential for
>intentionality (assuming you are right), eventually we will be
>able to build devices that have these causal processes, and that
>will be artificial intelligence."

Bell:
The Church-Turing thesis implies that anything which is computable
at all can be computed by a Turing-equivalent machine, and
present-day computers are already Turing-equivalent (memory limits
aside). Therefore, if the many mansions reply were correct, the
future devices it imagines would have to rely on something other
than computation, so the reply has nothing to do with computation
and is irrelevant to this discussion. A toy Turing machine is
sketched below to make 'Turing-equivalent' concrete.
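
A Turing machine is nothing more than a finite rule table acting on
an unbounded tape. The following toy simulator (my own sketch; the
example rules simply add one to a unary number) shows the kind of
machine the thesis refers to:

    def run_turing_machine(rules, tape, state="start", blank="_", steps=1000):
        tape = list(tape)
        head = 0
        for _ in range(steps):
            if state == "halt":
                break
            symbol = tape[head] if head < len(tape) else blank
            new_state, write, move = rules[(state, symbol)]
            if head >= len(tape):
                tape.append(blank)     # grow the tape on demand
            tape[head] = write
            head += 1 if move == "R" else -1
            state = new_state
        return "".join(tape)

    # Rules: scan right over the 1s, write one more 1 on the first blank.
    rules = {
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("halt", "1", "R"),
    }
    print(run_turing_machine(rules, "111"))  # -> "1111": three plus one, in unary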

>SEARLE:
>No one supposes that computer simulations of a five-alarm fire
>will burn the neighbourhood down or that a computer simulation of a
>rainstorm will leave us all drenched. Why on earth would anyone
>suppose that a computer simulation of understanding actually
>understood anything?

Bell:
The simulated fire that does not burn down the neighbourhood is not
a relevant analogy. Within the simulation the fire is real; the
simulated entities in the program react to it as such. Whether we
would wish to describe this as real is a definitional matter and of
no consequence: there is a domain within the program in which there
is a fire. The case is the same when Searle executes the Chinese
room program; the intelligence is present in the domain of the
algorithms. Searle as a person is external to this process, and can
only look in at the changing state of the algorithms. When I think
of something, it is not observable from the outside either. Searle's
room problem is simply the other-minds barrier in force.


