Re: Symbol Grounding Problem

From: Baden, Denise (DB193@psy.soton.ac.uk)
Date: Sun Feb 04 1996 - 18:34:50 GMT


Pylyshyn believes that cognition is computation, i.e. that thought is
propositional. He denies the explanatory power and causal role of
images, and claims that explanation can only occur when images are
cashed into the language of thought, i.e. into propositions. Pylyshyn
regards the sensory system as part of the hardware, and hence as
precognitive: only when processing becomes computational is it
cognition. The computational level is the programme level. This is
independent of the details of the hardware and could be run on very
different systems, i.e. it is implementation independent.
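
To make that claim concrete, here is a minimal sketch (in Python; the
rules and names are invented for illustration, not Pylyshyn's) of what
implementation independence amounts to: the same programme, in the
sense of the same input/output behaviour, realised by two different
mechanisms.

    # A trivial "programme": a table of rules pairing input symbols
    # with output symbols (the rules are placeholders for illustration).
    RULES = {"squiggle": "squoggle", "ping": "pong"}

    def implementation_1(symbol):
        # Realisation 1: a direct hash-table lookup.
        return RULES.get(symbol, "?")

    def implementation_2(symbol):
        # Realisation 2: a linear scan over the same rules.
        for antecedent, consequent in RULES.items():
            if antecedent == symbol:
                return consequent
        return "?"

    # Both mechanisms realise the same programme: their input/output
    # behaviour is identical, so at the computational level they are
    # indistinguishable.
    assert all(implementation_1(s) == implementation_2(s)
               for s in ["squiggle", "ping", "unknown"])

On this view, cognitive explanation lives entirely at the level of the
shared programme; which mechanism happens to realise it matters no more
than the particular hardware of the brain.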

Searle, however, criticises the strong AI position, which holds that
the mind is a computer programme, that the brain is irrelevant, and
that the Turing Test (i.e. a computer penpal that could pass as a
human from its responses alone) is decisive. Turing's point is that if
a machine passes the Turing Test, finding out that it is a machine is
not grounds for denying it has a mind. Searle's Chinese Room Argument
exploits a loophole in the other-minds barrier. He agrees that when a
computer acts as a Chinese penpal, one may not be able to tell it from
a human being. However, Searle claims he could do the same thing
himself without understanding, by memorising all the rules for
manipulating the symbols, as if from a Chinese-Chinese dictionary: he
could pass the Turing Test by symbol manipulation alone, whilst having
zero understanding of Chinese. If the computer understands Chinese, it
cannot be simply because it is running the right programme, for Searle
is running the very same programme and he does not understand.
Understanding, therefore, is not implementation independent.
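
Searle's point can be put in toy form (this sketch and its symbol
names are my own illustration, not his): the rule-follower operates
only on symbol shapes, so replacing every Chinese character with an
arbitrary opaque token would leave the programme's behaviour entirely
unchanged.

    # The "rulebook" maps shapes to shapes; nothing in it refers to
    # what any symbol means (the shapes here are arbitrary tokens).
    RULEBOOK = {("SQUIGGLE", "SQUOGGLE"): "SQUAGGLE"}

    def chinese_room(incoming):
        # Searle, or a CPU, applies the rules. Producing the correct
        # output requires only shape-matching, never interpretation.
        return RULEBOOK.get(tuple(incoming), "DEFAULT")

    print(chinese_room(["SQUIGGLE", "SQUOGGLE"]))  # -> SQUAGGLE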

Searle concluded from this that cognition is not computation at all,
and that our cognitive abilities are grounded completely and
necessarily in the brain. However, his Chinese Room Argument has only
shown that cognition cannot be just computation. Pure symbol
manipulation cannot give rise to any intrinsic meaning: as with a
book, the meaning emerges only when it is interpreted by a reader.
Cognition thus requires us to get from meaningless symbols to what
the symbols are about.

This can occur in several ways. The most obvious is that symbols must
be grounded in a bottom-up fashion by their direct reference to
objects in the real world. Features of the material world also
require some sort of weighting and categorisation: if every feature
in the environment were paid equal attention, then everything would
be unique, so the very act of, for example, labelling something with
two eyes, a nose and a mouth a `face' requires picking out the
salient, invariant features. Once symbols such as words have been
grounded, they can give rise to higher-order symbols which do not
have to be grounded in the same way. For example, if the words `hair'
and `chin' are grounded, the word `beard' would have some intrinsic
meaning to the system by reference to those two words. On these
criteria, Searle's Chinese Room Argument would not apply to a robot
with sensorimotor transduction abilities, for its symbols would then
make contact with what they were about.
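
Here is a minimal sketch of that grounding scheme (the feature
detectors are hypothetical stand-ins for whatever a robot's
sensorimotor transducers would actually deliver):

    # Elementary symbols are grounded bottom-up in (hypothetical)
    # invariant-feature detectors operating on sensory input, here
    # represented simply as a set of detected features.
    def detects_hair(percept):
        return "hair_texture" in percept

    def detects_chin(percept):
        return "chin_shape" in percept

    GROUNDED = {"hair": detects_hair, "chin": detects_chin}

    # "beard" is defined purely by reference to already-grounded
    # symbols ("hair on a chin"), so it inherits their contact with
    # the world without a direct sensory link of its own.
    def detects_beard(percept):
        return GROUNDED["hair"](percept) and GROUNDED["chin"](percept)

    print(detects_beard({"hair_texture", "chin_shape"}))  # -> True

The point is that `beard' needs no detector of its own; its grounding
is inherited from `hair' and `chin'.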

The Chinese Room Argument draws a direct analogy between a computer
operating purely on symbol manipulation and Searle giving answers in
Chinese based purely on a Chinese-Chinese dictionary. However, Searle
still has a mind, and would therefore be making a natural effort to
make sense of the symbols he has had to analyse to quite improbable
lengths of complexity and depth. His understanding would thus very
likely be qualitatively different from that of a computer. Even if
the symbols were grounded, the analogy could be challenged: a robot
with no inbuilt fears or desires would see nothing of relevance or
meaning in the world of objects. A cup, for example, might be as
meaningless a symbol, albeit in 3D, as a word written in a dictionary.


