Re: Searle: Minds, Brains and Programs

From: Patel Krupali (kp898@ecs.soton.ac.uk)
Date: Wed May 30 2001 - 18:40:33 BST


On Hosier on Searle, John R.: Minds, Brains and Programs (1980)
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

> Hosier:
> Searle's CRA can be briefly described as follows: Suppose that an
> exclusively English-speaking person is locked in a room and then
> given a set of rules for responding to Chinese script. Now suppose
> that by following these rules the person can take Chinese writing
> as input and give Turing-indistinguishable responses as output, i.e.
> the person can read and write Chinese. (N.B. This is not to say
> that the person understands what he is doing.)

> This argument seems flawed only in the respect that, although a
> program could be made to give the appropriate symbolic responses to
> some symbol inputs, surely there can be no possible program that
> could give adequate responses to all symbol inputs. For instance,
> humans have trouble answering 'what is the meaning of life', as
> would any AI solution. However, more fundamentally than this, it
> seems obvious that a symbol-responding system could not just be a
> set of rules. The system would have to include some kind of
> experience 'history' so that questions based on previous questions
> could be answered. As well as this, it would seem to need some kind
> of actual symbol grounding within the physical world, so that
> actual 'meaning' could be attached to the input and output symbols.
> It is this 'meaning' that Searle suggests is lacking from any
> computational AI system.

Patel:
Within the Chinese Room, the fact that the individual's linguistic
behaviour is indistinguishable from that of someone who speaks Chinese
does not itself demonstrate that the simulator understands Chinese. It
must be stressed that the simulator's results are based entirely on his
or her understanding of English, for it is the instructions provided in
that language which make the simulation possible.

In whatever sense of the word 'understand' we choose, the simulations
produced do not mean that there is an understanding of Chinese.

Similarly, Turing's own mathematical achievement was a method for
breaking calculations down into such remarkably simple steps that they
could be carried out according to a series of instructions whose
following did not involve any understanding of the mathematical
operations being performed.

Therefore simple operations could be carried out even if they were not
understood. I think, though, that he fails to appreciate that this was
their character, assuming instead that when people carry out
computations this involves understanding, and hence that if computing
can be done by machines, those machines must be capable of
understanding! However, if the Chinese Room, or mechanical computation
of any kind, is to work, then some element of understanding must be
involved somewhere.

The Chinese Room situation is designed so that there is no
understanding of Chinese: all the instructions are in English, and
everything is written entirely in English.
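
To make the rule-following point concrete, here is a minimal sketch of
my own (it appears nowhere in Searle's or Hosier's text, and the table
entries and function name are invented purely for illustration): a toy
rule book, written and commented entirely in English-language code,
that maps Chinese input strings to Chinese output strings by rote
lookup. Executing it requires no grasp of what any of the symbols
mean; whatever understanding of Chinese there is went into compiling
the table.

    # Toy "rule book": Chinese inputs mapped to Chinese outputs.
    # The entries are hypothetical and serve only as an illustration.
    RULE_BOOK = {
        "你好吗": "我很好",            # "How are you?" -> "I am fine"
        "你叫什么名字": "我没有名字",    # "What is your name?" -> "I have no name"
    }

    def simulate(input_symbols):
        # Pure matching and copying of symbol strings: this function never
        # needs to know what the strings mean. Any understanding of Chinese
        # was exercised by whoever wrote RULE_BOOK, not by its executor.
        return RULE_BOOK.get(input_symbols, "请再说一遍")   # "Please say that again"

    print(simulate("你好吗"))          # prints 我很好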

I think the room you mention above is a system which operates on the
basis of English, and that within it there is no understanding of
Chinese at all. I would qualify this point, however, and say that
there is an element of understanding here: in the setting up of the
room and in the writing of the instructions. I do not see how a person
could write instructions in English for the successful simulation of
Chinese linguistic behaviour without a large and detailed
understanding of Chinese. So there is understanding involved, but it
is understanding that is expressed in the detail of the instructions
or rules.

Does this mean such instructions possess an understanding of Chinese?


