Re: Symbol Grounding Problem

From: HARNAD Stevan (harnad@cogsci.soton.ac.uk)
Date: Wed May 15 1996 - 17:44:36 BST


> From: "Herheim Aaste" <AH595@psy.soton.ac.uk>
> Date: Wed, 15 May 1996 10:50:30 GMT
>
> I didn't quite get what the symbol grounding problem was,- could you
> explain it again?

A symbol system is a set of symbols (objects or patterns). These
symbols can be manipulated according to rules that operate purely on
the basis of their shapes, which are arbitrary, rather than on the
basis of what the shapes can be interpreted as meaning. For example,
"2 + 2 = X" is a string of symbols that can be manipulated to give
another string of symbols: "X = 4." And this can be interpreted as
meaning that two and two are four, but the manipulation did not make
use of that meaning; it could not make use of it. Symbol manipulation
is only syntactic: it is based only on symbol-shape.

Yet symbol manipulation has the remarkable property that it CAN be
interpreted as meaning something, even though it does not "contain" that
meaning, or use it in any way.
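
To make the syntax-only point concrete, here is a toy sketch in Python
(purely illustrative; the rewrite table is made up and stands for no
particular formal system). The program matches and replaces symbol
strings by their shape alone; any reading of "2" or "4" as numbers
happens in the interpreter's mind, not in the program:

    # Rewrite rules that operate on symbol shapes alone; the program
    # never consults what the shapes might mean.
    REWRITE_RULES = {
        "2 + 2 = X": "X = 4",
    }

    def manipulate(symbol_string):
        """Apply a shape-based rewrite rule; no meanings are involved."""
        return REWRITE_RULES.get(symbol_string, symbol_string)

    print(manipulate("2 + 2 = X"))   # prints "X = 4"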

Now our thoughts can also be interpreted as meaning something: When you
think the thought that "two and two are four," you have something in
mind, namely, that two and two are four. If you were simply saying a
sentence in the Inuit language to yourself in your mind, and you didn't
understand Inuit, and you happened to be saying the Inuit sentence "two
and two are four," then you would NOT have that meaning in mind when
you were mentally producing that sentence, because you would not know
what you were saying. An Inuit speaker, however, if he could read your
mind, would know what you were saying.

This is the same with a book: A book has symbols. If you know the
language the book is written in, you can read it, and it will all make
sense to you: You will be able to interpret the symbols as meaning
something. But clearly that meaning would not be in the book: It would
be in your mind, in the mind of you, the reader, the interpreter. The
same would be true if instead of being a static symbol system in a
book, it were a dynamic symbol system in a computer that changed states
and could answer questions -- say, a computerised encyclopedia: Again,
if the computer produced the symbols "two and two are four," they would
no more mean anything to the computer than they do to the book. They
only mean something to you, because you have a mind (whatever that is)
and something is going on in there (no one yet knows what), and what is
going on in there is what "meaning" is.

So could what is going on in there just be more symbols and symbol
manipulations? Let's consider the steps: The sentence "two and two are
four" in a book does not mean anything; its meaning simply comes from
the mind of anyone who reads, understands, and interprets the symbols in
the book. The same is true of dynamic symbols in a computer. Could the
same thing be true in your mind? Could the meaning of the symbols in
your mind reside, not in your mind, but in the mind of someone ELSE who
interprets your symbols? But then what is going on in HIS mind?

Does this remind you of the homunculus problem? It should. The
homunculus problem was a problem of infinite regress: How do I
recognise that this is the image of my grandmother? What is going on in
my mind is an image, which I recognise as being her. Who is looking at
that image in my mind's eye and recognising it's her? And what's going
on in HIS mind? And so on. This is the homuncular regress. The
process just repeats over and over without ever explaining what's
going on in the mind except more homunculi, with still more unexplained
minds.

The symbol grounding problem is similar, but external: I, unlike the
book, MEAN "two and two are four" when I say it. But in what does my
meaning that consist? That YOU can interpret what I say or think as
meaning it? So my meaning is going on in YOUR mind, not mine? But hang
on; what about you? In what does the meaning in YOUR mind consist? That
yet another person can interpret it as meaning something? Again, we have
an infinite regress, so something is wrong.

The solution is different in the two cases: To get rid of the
homunculus, all you need to do is to supply an internal but mechanical
process that can process the image without itself requiring a mind, and
come up with the right answer ("it's Grandma"). Kosslyn showed that
analog processing will do that job for you; symbol systems can probably
do it too; neural nets may do it best.

But now let's just consider symbol systems: If a symbol system can stand
in for the homunculus, and process the image, and generate the thought
"it's Grandma," where is the MEANING of that thought? For we know that
there is no meaning in symbol systems; there's just syntax, mechanical
manipulation of symbols based on rules. How can a symbol system MEAN
"it's Grandma," any more than a book with that sentence in it can?

This is the symbol grounding problem: How do you ground the meanings of
symbols in something other than just more, meaningless symbols? The
example I gave you was trying to learn Chinese from a Chinese-Chinese
dictionary: All the symbols are there, and anyone who knows Chinese,
even a little Chinese to get started, can USE the dictionary to find
what the words mean. But the meaning will be in his mind, and the proof
is that YOU, who know no Chinese at all, can't even get started with
such a dictionary; you can't even "get off the ground."
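
If you like, you can simulate the dictionary-go-round in a few lines of
Python (the "entries" below are arbitrary made-up strings standing in
for Chinese words, not real dictionary data): chasing definitions only
ever turns up more symbols, round in a circle.

    # A made-up symbol-to-symbols "dictionary": every entry is defined
    # only in terms of other entries.
    DICTIONARY = {
        "A": ["B", "C"],
        "B": ["C", "A"],
        "C": ["A"],
    }

    def chase(symbol, steps=8):
        """Follow definitions; nothing but further symbols ever appears."""
        trail = [symbol]
        for _ in range(steps):
            symbol = DICTIONARY[symbol][0]   # first defining symbol
            trail.append(symbol)
        return trail

    print(chase("A"))   # ['A', 'B', 'C', 'A', 'B', 'C', ...] -- a loop

Unless at least one symbol is already connected to something outside the
circle, the chase never bottoms out.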

So symbols need to be grounded in something more than just further
symbols and symbol manipulations, as in the Chinese/Chinese
Dictionary-Go-Round I showed in lecture. They have to be grounded in a
DIRECT connection between the symbols and the things they stand for.

One possibility is to use neural nets that find the (classical) features
of the categories that symbols name from the analog "shadows" cast by
the objects that the symbols stand for. A robot that could interact with
all the things in the world that its internal symbols refer to would
not be dependent on an external interpreter: It would be autonomous,
and its symbol meanings would be grounded in its robotic capacities,
as enacted in the world of things that the symbols are about. (Think of
the Turing Test.)
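
As a very rough sketch of the kind of mechanism I mean (my own toy
simplification, not anyone's actual model): attach each category symbol
to a prototype learned from analog sensory "shadows," so that a new
input connects to a symbol through its projection in sensor space
rather than through other symbols.

    # Toy grounding: category symbols are attached to prototypes
    # averaged from (hand-made) analog feature vectors.
    def centroid(vectors):
        """Average a list of feature vectors component-wise."""
        n = len(vectors)
        dims = range(len(vectors[0]))
        return [sum(v[i] for v in vectors) / n for i in dims]

    def nearest_symbol(shadow, prototypes):
        """Return the symbol whose prototype is closest to the input."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(prototypes, key=lambda s: dist(shadow, prototypes[s]))

    # Hypothetical training "shadows" for two categories.
    samples = {
        "grandma": [[0.9, 0.1], [0.8, 0.2]],
        "chair":   [[0.1, 0.9], [0.2, 0.8]],
    }
    prototypes = {sym: centroid(vecs) for sym, vecs in samples.items()}

    print(nearest_symbol([0.85, 0.15], prototypes))   # -> grandma

In a real robot the feature vectors would come from transducers and the
categoriser would be a trained neural net, but the grounding relation is
the same: the symbol tokens are hooked directly to the analog
projections of the things they name.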

> I would like to read the "colourless green ideas sleep
> furiously"-poem as well...

Chomsky's original example of a sentence that is syntactically correct
but meaningless was: "Colourless green ideas sleep furiously."

John Hollander's evocative poem was:

        Coiled Alizarine

        Curiously deep
        the slumber of crimson thoughts,
        but, breathless,
        in stodgy viridian,
        colourless green ideas
        sleep furiously.

That's the whole of the poem. Here's some dictionary help, though:

alizarine: [from alizari, Levantine name for "madder"]
A peculiar yellowish-green colouring matter, formerly obtained
from madder, a plant much used in dyeing red.

viridian: "Veronese green"


