Re: Searle's Chinese Room Argument

From: Harnad, Stevan (harnad@cogsci.soton.ac.uk)
Date: Tue Feb 18 1997 - 16:44:36 GMT


> From: Cherry, Sandra <sc1396@soton.ac.uk>
>
> This is what I have understood from my readings. Searle implies that a
> programme for computing Chinese is made up of symbols, which do not have
> any meaning. Hence this is how he himself is able to communicate in
> Chinese, with no knowledge of this language.

Searle tried to show that being in the mental state of "understanding
what someone is writing to you" is not the same as being in a
computational state in a computer that can read and write letters just
as if it understood them.

> Harnad replied using the Symbol Grounding Problem. "Symbols have to be
> connected to the real world" to give us meanings, in order for anyone
> to understand them. Using his example of a horse & stripes = zebra,
> hence, these symbols mean something specific.

Searle showed that to understand a language cannot be just to be
a system that is running the right programme. Programmes manipulate
symbols according to rules such as: given "1 + 1 = ?", substitute "2"
for "?". The computer can do this just on the basis of the shapes of the
symbols and the rules (algorithms) that its programme makes it follow.
We could follow the same rules without understanding what the numbers
and addition MEAN. So symbol manipulation is not the same as symbol
understanding.
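To make the point concrete, here is a toy sketch of my own (not from
Searle): a programme that "answers" sums purely by matching and
substituting symbol shapes. No arithmetic is ever performed, and
nothing in it understands what "1" or "+" mean.

```python
# A lookup table of shapes: if you see this string, emit that string.
# The rules, like the names below, are purely illustrative.
RULES = {
    "1 + 1 = ?": "1 + 1 = 2",
    "2 + 2 = ?": "2 + 2 = 4",
}

def manipulate(symbols: str) -> str:
    """Apply a substitution rule by shape alone; no numbers are added."""
    return RULES.get(symbols, symbols)

print(manipulate("1 + 1 = ?"))  # "1 + 1 = 2" -- by shape, not by meaning
```

A system like this passes the "1 + 1" test while understanding nothing,
which is exactly the gap Searle's argument exploits.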

What might symbol understanding be? Well, one possibility might be
this: symbols stand for things. "Cat" stands for those furry little
creatures. So maybe instead of a computer, which only manipulates
symbols, you need a robot: one that could not only manipulate symbols
so you think it understands them, but could also pick out and name and
describe all the things in the world that its symbols stand for.
Whatever gave it that ability would "ground" its
symbols, which are otherwise as ungrounded as the words in the
dictionary of a language that you don't understand. (No matter how many
of the words you look up, you never get to their meaning because you
don't understand a single word of what the dictionary is saying).

Kid-sib version: The meanings of symbols need to be grounded in the
robotic capacity to recognise and interact with the things in the world
that the symbols are about.

> I.e. a zebra can be
> thought of as a horse with stripes, often what my young children called
> them. Yes I am all for these kid sib explanations! Harnad also suggests
> using a Hybrid model, combining symbolic and neural net techniques in
> order to compute.

If "horse" is grounded in the robotic capacity to recognise and name
horses and "stripes" is grounded in the robotic capacity to recognise
and name stripes, then "zebra" inherits that grounding:
"Zebra = Horse + Stripes" (This indirect grounding of new concepts
is the real power of language.)
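A toy sketch of this inheritance (my illustration, with made-up
detector names, not anything from the readings): two symbols grounded
in stand-in category detectors, and a third symbol that is never
trained on its referents at all, but is defined purely out of the
already-grounded ones.

```python
def looks_like_horse(thing: dict) -> bool:
    # Stand-in for a robotic category detector grounded in sensors.
    return thing.get("shape") == "horse"

def has_stripes(thing: dict) -> bool:
    # Likewise grounded directly in the robot's sensory capacities.
    return thing.get("pattern") == "stripes"

def is_zebra(thing: dict) -> bool:
    # "Zebra = Horse + Stripes": grounded only indirectly, through
    # symbols that are themselves grounded. No zebra was ever seen.
    return looks_like_horse(thing) and has_stripes(thing)

print(is_zebra({"shape": "horse", "pattern": "stripes"}))  # True
```

The robot can now pick out zebras on first encounter, which is the
power of acquiring new categories by symbolic hearsay rather than
direct sensory learning.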

> Thus, Searle implies that symbols can be used as he does not require
> understanding, whereas, Harnad would rather use the Hybrid model.

Searle and I are on the same side. Searle showed it couldn't all be
symbols, and I suggested what else might be needed.

A hybrid system would be a robot that had analog projections, neural
nets categorising them and symbols naming them.
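Schematically, the three stages might look like this (a hypothetical
sketch of my own; the threshold "categoriser" is a trivial stand-in for
a trained neural net, and all names are invented for illustration):

```python
def analog_projection(world_input):
    """Stage 1: the raw analog sensory array (here just numbers)."""
    return [float(x) for x in world_input]

def categorise(projection):
    """Stage 2: stand-in for a neural net that carves the analog
    projection into categories (here a trivial threshold)."""
    return "large" if sum(projection) > 10 else "small"

# Stage 3: symbols naming the categories the net picks out.
SYMBOL_NAMES = {"large": "HORSE", "small": "CAT"}

def name(category):
    """Attach the symbolic name, now grounded via stages 1 and 2."""
    return SYMBOL_NAMES[category]

print(name(categorise(analog_projection([6, 7]))))  # HORSE
```

The symbols at the top of this pipeline are connected, through the
categoriser, to what comes in from the world, which is just what the
ungrounded symbols of a pure computer lack.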



This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:49 GMT