Re: Symbol Grounding Problem

From: Stevan Harnad (
Date: Sun Feb 04 1996 - 20:04:18 GMT

> From: "Masters, Kate" <>
> Date: Tue, 23 Jan 1996 16:32:18 +0000
> The Chinese Room Argument
> The Chinese Room Argument was proposed by John Searle (1980) as a
> method of disproving the claims made by symbolic AI: that is, to
> show that a system's ability to pass the T2 (Harnad) Turing test does
> not necessarily mean that the system has intentionality or gains
> "intrinsic meaning" from the symbols it is processing.

What were the claims of AI (computationalism), and what were the
arguments and evidence supporting them? What is T2? And what is
intentionality, or "intrinsic meaning"?

> Searle's argument asked the reader to imagine Searle being locked in a
> room, from where only symbolic messages (Chinese symbols) could
> pass in and out. Searle had no knowledge of written or spoken
> Chinese. Searle is given input Chinese symbols and sets of English
> instructions. These instructions enable him to relate the symbols to
> each other and produce a set of output symbols.
> These actions are intended to be an analogy between Searle and a
> computer.

Not an analogy; another implementation of the same programme, one that
was in a position to tell you whether you were right in attributing
Chinese-understanding to it on the strength of its T2 (penpal)
performance.

> In this case those who are passing the symbols into
> Searle's room and receiving the sets of symbols which he passes out
> are "the programmers".

No, they are letters to and from a Chinese penpal. The programmers
merely wrote the programme. They are long-gone now.

> The sets of Chinese symbols are, respectively,
> "a script", "a story" and "a question". The English instructions are
> "the program", and the set of symbols that Searle produces in response
> to the third set of symbols and instructions is "the answer to the
> question". Searle goes on to argue that whilst he could develop
> skills which enabled him to produce "answers" which were interpretable
> as those of a native Chinese speaker, the symbols he was manipulating,
> according to the rules he was given, would still have absolutely no
> meaning to him. He would still not understand Chinese.
> Searle's argument was directed against the computational approach put
> forward by Pylyshyn. The right symbol system, which
> performs "cognitively", can run on any hardware and still perform
> this way: the implementation of this system is irrelevant. In other
> words, the system has "implementation independence".

That is not particularly Pylyshyn's contribution. Computation is
implementation-independent regardless of whether Pylyshyn's
computationalism (Strong AI), to the effect that cognition is
computation, is true.

> A T2 computer could pass the Turing test by enabling "penfriend" style
> messages which were indistinguishable from those one would expect from
> a real penfriend. A T3 computer is indistinguishable from a person at
> every level (except for brain function). The problem Searle would
> have here is in sensorimotor transduction. A T3 robot would have to
> use this in order to communicate signals throughout its body.
> Although Searle can act as the symbol manipulation part of the robot
> he cannot be the whole robot, hence he is only part of the system; the
> system as a whole may understand the symbols even though Searle does
> not. Sensorimotor transduction is not implementation independent.

Fine, but what does all this imply about the truth of computationalism,
and its alternatives?

> The Symbol Grounding Problem
> Before we can tackle the problem of how symbols are grounded within a
> system it is necessary to define both symbol systems and grounding.
> A symbol system is a set of physical tokens which are manipulated by
> other such tokens (rules).

No, they are manipulated by physical operations which are based only on
the shapes of the symbols, not their meanings. Examples are the Turing
Machine's basic operations: reading a symbol, writing a symbol, moving
the tape, halting.
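
Such shape-based manipulation can be sketched in a few lines of Python
(a toy illustration, not anything from the original exchange; the rule
table and state names are made up for the example):

```python
# Rules keyed purely on token shape:
# (state, symbol read) -> (symbol to write, head move, next state).
# This toy machine flips 0s to 1s and halts at the first blank ("_").
rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),
}

def run(tape):
    tape = list(tape)
    head, state = 0, "scan"
    while state != "halt":
        write, move, state = rules[(state, tape[head])]  # shape lookup only
        tape[head] = write   # write a symbol
        head += move         # move along the tape
    return "".join(tape)

print(run("0101_"))  # -> 1111_
```

The point of the sketch: at no step does the machine consult what "0"
or "1" might mean; it consults only their shapes.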

> This manipulation is purely syntactic, as
> is demonstrated by the consideration of mathematical truths. The
> whole system, the symbols and the rules together, is semantically
> interpretable. This semantic interpretation is the 'grounding' point.

Too quick. You have to tell kid-bro slowly how a formal mathematical
system is a symbol system. Give examples. And the semantic
interpretation is not the "grounding point," since we can all
interpret, say, arithmetic, but a desk calculator is not grounded.
Grounding has to do with direct connections between symbols and what
they refer to, as between a robot and things in the world.

> Grounding is when the connections between symbols and what they are
> about is direct, part of the system rather than performed by an
> outside interpreter. A linguistic proposition is intrinsically
> intentional to the proposer. If the proposer connects a symbol to
> that which the symbol is about the symbol is grounded. In order for
> a symbol system to have an "understanding" the symbols have to represent
> something, therefore the system is parasitic upon those symbols which
> are grounded.

I'm afraid this is somewhat garbled: Yes, grounding is a direct causal
connection between symbols and what they are about, but not just any
connection. (I have emphasised sensorimotor categorisation capacity,
for example.) Yes, a sentence is meaningful to a person who says it, but
that doesn't tell us what kind of thing is going on in his head. And if
the only connection between a symbol (say, in a book, or in a computer)
and what it is about is the one that goes through the head of the
interpreter, then the symbol is UNgrounded, not grounded.
Interpreter-mediated connection is precisely what grounding is not!
This, rather, is parasitism. Any "representation" must be to the mind
of the system itself, not to the mind of an external interpreter.

> If we were to try to learn Chinese from a Chinese/Chinese dictionary
> we would only ever be stuck in a merry-go-round-like cycle because
> each explanation would lead to another explanation all of which were
> in Chinese. Similarly, as Searle showed in his "Chinese Room", while
> it is possible to reproduce answers to Chinese questions identically
> to the answers a Chinese speaker would make it is not possible to gain
> meaning from this symbol manipulation. An English speaker cannot
> learn another language without grounding it in English. For example,
> we start by saying "Ah, so 'Bonjour' means 'Hello'" when we are
> learning French. If we were a young child learning French as a first
> language "bonjour" would be grounded in terms of a phonetic action
> that is made at the time of seeing another person (particularly in the
> morning).
> Symbol grounding is a problem in two different directions: 1).
> Psychologists would like to know how cognition works and 2).
> Computation specialists would like to use symbolic AI in order to
> create an intelligent system that can convincingly mimic
> consciousness.

No, computationalists don't want to "mimic" consciousness, they want to
generate it: No tricks, no illusions. The real thing.

> The problem is that of "Where is the meaning of the symbols grounded?".

"How" rather than "where" I should think...
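
The Chinese/Chinese dictionary-go-round above can be made concrete with
a toy sketch (the entries are hypothetical; real dictionaries are just
bigger versions of the same cycle):

```python
# Every symbol is defined only in terms of other symbols, so following
# definitions never bottoms out in meaning -- it just cycles.
dictionary = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}

def lookup(symbol, seen=None):
    seen = seen or []
    if symbol in seen:
        return seen + [symbol]  # back where we started: a cycle
    # Follow the first symbol in the definition, and so on, forever.
    return lookup(dictionary[symbol][0], seen + [symbol])

print(lookup("A"))  # -> ['A', 'B', 'C', 'A']
```

No amount of symbol-to-symbol lookup gets you outside the symbol
system; that exit is what grounding would have to supply.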

> Possible Solutions
> Harnad, S. (1990) proposes what he calls a "hybrid" solution to the
> symbol grounding problem. It relies upon symbols being grounded from
> the bottom up in two different kinds of representations. The first of
> these are iconic representations, which are analogues of the proximal
> sensory projections of distal images and events. The second are
> categorical representations, which pick out the invariant features of
> the iconic representations and use them in a process of discrimination
> and identification which leads to absolute judgments. These two
> forms of representation are both nonsymbolic, and neither of them
> gives meaning in itself. The process of categorisation "names"
> categories. These names are hence symbols which enable the system to
> act upon the represented images and events and thus these images and
> events can be said, according to the earlier definition, to be
> grounded in the world.

Well, I'm not sure kid-bro could have figured out what that last
paragraph meant. It sounds as if a device that can learn to categorise
by extracting sensory features is what is meant, so if such a device
could operate at T3 robotic scale, the system's internal symbols and
language would be grounded.
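
One construal of that hybrid pipeline (mine, not Harnad's
implementation; the feature and the category names are illustrative):

```python
def iconic(projection):
    # Iconic representation: an analogue copy of the proximal sensory
    # projection (identity here, for the sketch).
    return list(projection)

def categorical(icon):
    # Categorical representation: pick out an invariant feature --
    # hypothetically, mean intensity of the projection.
    return sum(icon) / len(icon)

def name(feature, threshold=0.5):
    # Absolute judgement: attach a symbol (a category name) to the
    # detected invariant; the name inherits its grounding from the
    # sensory capacity that produced the judgement.
    return "bright" if feature > threshold else "dark"

print(name(categorical(iconic([0.9, 0.8, 0.7]))))  # -> bright
print(name(categorical(iconic([0.1, 0.2, 0.0]))))  # -> dark
```

The symbols "bright" and "dark" here connect to their referents via the
system's own sensory processing, not via an external interpreter, which
is the sense of grounding at issue; a T3-scale version of this is what
the reply gestures at.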

This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:58 GMT