Re: Harnad: The Symbol Grounding Problem

From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Sun Apr 01 2001 - 13:34:23 BST


On Fri, 2 Mar 2001, Watfa Nadine wrote:

> http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html
>
>
> Watfa:
> The mind cannot solely be a symbol system, nor cognition
> solely symbol manipulation. If the mind were just a symbol
> system we would only be able to do things such as
> calculation, reasoning and problem solving, but we would be
> unable to do sensorimotor activities, learn, or even make
> mistakes. To humans, symbols are not arbitrary objects:
> symbols have meanings, and humans operate on those meanings.

But what evidence do you have that learning is not symbol manipulation
too? There are learning algorithms that take input data and learn, from
repeated experience, to produce the right output. And surely algorithms
can go wrong, so there's plenty of room for mistakes. Remember the
Granny Objections?
http://www.cogsci.soton.ac.uk/~harnad/CM302/Granny/sld005.htm
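
To make that concrete, here is a toy sketch (my own illustration, in
Python; nothing in the target article commits to it): a
perceptron-style learner is nothing but rule-governed symbol
manipulation, yet from repeated exposure to input/output pairs it
learns to produce the right output, making (and correcting) mistakes
along the way.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, target) pairs, target in {0, 1}."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, target in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction            # the "mistake" signal
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from repeated exposure to input/output pairs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)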

And although sensorimotor transduction itself cannot be symbolic,
couldn't all the mechanisms that guide and govern it be?

By all means object, criticize, and disagree, but always give the reasons
and evidence supporting your objections, criticisms and disagreements.

> Watfa:
> Symbol systems are one way of modelling the mind. Another
> method discussed here is connectionism: neural nets, which
> are parallel distributed systems that change their
> interconnections with experience. Such systems have turned
> out to have powerful learning capacities.

Yes, but they have also been simulated on digital, serial systems. Is
there any ESSENTIAL difference between real parallel/distributed nets
and fast serial simulations of them? For if not, then it is not clear
why the fast serial symbol system couldn't perform the functions of the
net just as well.
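
For instance, here is a serial, step-by-step sketch (again just my own
toy illustration) of the forward pass of a tiny "parallel distributed"
net: the loop visits the units one at a time, yet it computes exactly
the same input/output function the parallel hardware would; only the
timing differs.

import math

def serial_forward(inputs, hidden_weights, output_weights):
    """One hidden layer; every 'parallel' unit is updated in strict serial order."""
    hidden = []
    for unit_weights in hidden_weights:            # visit the units one at a time
        net = sum(w * x for w, x in zip(unit_weights, inputs))
        hidden.append(1.0 / (1.0 + math.exp(-net)))    # sigmoid activation
    return sum(w * h for w, h in zip(output_weights, hidden))

# Two inputs, three hidden units, one output unit: toy weights.
y = serial_forward([0.5, -1.0],
                   [[0.2, 0.8], [-0.5, 0.1], [0.7, 0.7]],
                   [1.0, -2.0, 0.5])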

> Watfa:
> What is a lifesize chunk of behaviour? What are the other
> possible implementations of such a dynamical system?

They are either T2 or T3 or some piece of human performance capacity that
we know is autonomous, that is, we know it can be done by a "module"
that does not draw on any other capacities.

(I know of no such autonomous modules. Chomsky proposed grammatical
capacity, but that has since turned out to depend on meaning and
perhaps logic too. Chess playing certainly isn't autonomous in our
brains. [Note that showing that something can be done by a computer
programme does not imply either that it is autonomous in our brains, or
that that is the way it is done by our brains.] Even language is not
autonomous.)

So, does anyone have a candidate for a lifesize, autonomous capacity,
apart from T2 or T3?

> Watfa:
> What are... "toy" tasks? And what is a lifesize
> behavioural capacity?

See above.

> Watfa:
> If our linguistic capacities and other skills are all
> symbolic then why use connectionism to model cognitive
> capacities? Why not just use symbol systems? The answer
> is that symbol systems are not enough to model our
> human-scale capacities.

Fine. But why not, exactly?

> Watfa:
> the robotic Turing Test [cannot] be passed by just a
> symbol system (computation alone). It needs a hybrid
> symbolic/sensorimotor system: combining both symbol systems
> and connectionism.

And what is the answer to someone who says: "Sensing and moving
themselves are not symbolic, but everything else is. So it's really just
a computer, computing, and using sensors/effectors as I/O devices."

> Watfa:
> There are an infinite number of languages, but only a
> finite number of ways to define them (i.e. symbol shapes
> meaning different things to different people).

Not clear what you are trying to define: words in languages? The problem
is not whether there is a finite or an infinite number of languages
(surely there can only be a finite number of actually spoken languages,
and only an infinity of POTENTIAL languages); nor is the problem that
there is a finite or infinite number of (potential) definitions for any
word. The problem is that the words in a language are just meaningless
symbols. And definitions are just strings of meaningless symbols too. How
do you get meaning out of that?

> Watfa:
> An example
> of this is if English was your first language, and assuming
> you know everything there is to know about the English
> language, you're trying to learn Chinese. "Word loops" can
> be formed from a starting symbol, following the meanings
> through the dictionary back to the original symbol. An
> appropriate property (i.e. meaning) can then be assigned to
> a specific "word loop" using prior first language
> knowledge. Given one symbol from the word loop, it can be
> identified from the applied meaning. In this way, one has
> redefined an established language. (As cryptologists of
> ancient languages have managed to do).

Kid-sib can't follow this: If you look up a definition of a word in a
dictionary for a language you don't know at all, all definitions will
be meaningless to you. As you keep looking up the words in the
definitions of the words in the definitions of the words... etc., two
things could happen: You could either eventually loop back to words you
have already looked up, now turning up as parts of the definitions of
the words that had appeared in the definitions of those original words,
and thereby keep looping through the same subset of the dictionary over
and over (never finding anything but the same, meaningless symbols over
and over); or your definition search could diverge and not start looping
till it has gone through most or all the words in the dictionary.

Of course, if you already know the meaning of some of the symbols (how
many do you need?), for example, because you know their translations in
a language you do know, then your search is grounded. (Cryptologists
decipher unknown languages, eventually translating them into known
languages.)
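
Here is the dictionary-go-round as a toy sketch (the symbols and the
dictionary are made up, of course): chasing definitions in a language
you do not know at all either loops back through symbols you have
already visited or exhausts the dictionary; either way it yields
nothing but more meaningless symbols, unless some of the symbols are
already grounded (e.g., by known translations), which is what halts
the regress.

toy_dictionary = {                      # hypothetical, meaningless symbols
    "glark": ["fretch", "blin"],
    "fretch": ["blin", "glark"],
    "blin": ["glark", "snerp"],
    "snerp": ["fretch"],
}

def chase_definitions(start, dictionary, grounded=frozenset()):
    """Follow definitions until every reachable word is either visited
    (the search has looped) or already grounded (the regress stops)."""
    visited, frontier = set(), [start]
    while frontier:
        word = frontier.pop()
        if word in visited or word in grounded:
            continue                    # looped back, or grounded: stop here
        visited.add(word)
        frontier.extend(dictionary.get(word, []))
    return visited

print(chase_definitions("glark", toy_dictionary))
# all four symbols visited, none any less meaningless than before
print(chase_definitions("glark", toy_dictionary, grounded={"blin", "snerp"}))
# the search bottoms out on the grounded symbols instead of cycling forever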

> Watfa:
> How come the symbols in our mind
> mean something? It has to be because some of those symbols
> are connected to the things they stand for by the
> sensorimotor mechanisms that detect and recognise those
> things. Then a dictionary is built in our minds from the
> grounded basic vocabulary, by combining and re-combining
> the symbols into higher-order categories.

Correct.
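
A toy sketch of the point (my own illustration; the "detectors" below
are just stand-ins for real sensorimotor category-detection
mechanisms): once "horse" and "stripes" are grounded, a new,
higher-order category such as "zebra" can be grounded purely
symbolically, by combining the already-grounded symbols in its
definition.

# Hypothetical stand-ins for sensorimotor category detectors.
def detects_horse(thing):
    return "horse-shaped" in thing

def detects_stripes(thing):
    return "striped" in thing

grounded = {"horse": detects_horse, "stripes": detects_stripes}

# "zebra" has never been seen; it is grounded indirectly, via a definition
# composed entirely of already-grounded symbols.
def detects_zebra(thing):
    return grounded["horse"](thing) and grounded["stripes"](thing)

print(detects_zebra({"horse-shaped", "striped"}))  # True, on first encounter
print(detects_zebra({"horse-shaped", "black"}))    # False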

> Watfa:
> The grounding of "horse & stripes" alone will not
> automatically give us zebra. You have not been told
> anything else (i.e. where the stripes are, are they on the
> horse, where on the horse, what colour are the stripes).

True. But this was just simplified. If a "Bebra" were simply a black
horse (black all over!), the example would be the same.

> Watfa:
> Say I don't know what a zebra looks like, and armed with
> the grounding inherited by the zebra, I see a Cheshire
> horse. I have no prior knowledge of what it is, but I know
> it looks like a horse, and it's got a stripy (blond and
> brown) tail, and stripy legs (again blond just above its
> hooves, and the rest brown). Well, to me I see "horse &
> stripes" so it must be a "zebra"! There needs to be a bit
> more information grounded in here (possibly with
> connectionism).

No, just a better definition, e.g., black and white stripes, in parallel
arcs...

(The connectionism would come in if you were learning what a zebra was
from trial and error experience and feedback; but here we were talking
about learning it from a grounded definition.)

> Watfa:
> This leads us on to the Credit/Blame assignment problem.
> I'm told I'm wrong but I don't know why I'm wrong - it's a
> horse with stripes isn't it? I've followed the right rules
> and features, which do I need to change? This is the
> "blame assignment problem". With only a few features this
> is not so hard a problem, but where there exists a huge
> number of rules and features this problem will be very
> difficult. This is the problem faced by any system that
> hopes to scale up to the human-scale learning capacity.
> And this is the major problem of trying to do AI with
> symbols only. Learning is mostly sensorimotor and
> nonsymbolic, and that's why it's necessary to have
> connectionism.

This is not correct, I'm afraid. The credit/blame assignment problem is
equally a problem for a learning algorithm and for a neural net. It is
not peculiar to symbol systems. And the zebra example was meant to be a
simplified instance of successful one-trial learning from a grounded
symbolic definition that IS sufficient to base successful performance
on. You have changed the problem into one in which the definition is NOT
sufficient. An interesting variant, but not relevant to the point at
hand.
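
To see that blame assignment bites any learner, symbolic or
connectionist, here is a toy sketch (my illustration only) of the
naive strategy: given an error signal and many candidate features, try
dropping each feature in turn and keep whatever reduces the error.
Workable for a handful of features; hopeless at human scale, where the
features interact.

def classify(active_features, example):
    """Toy symbolic rule: call it a zebra iff every active feature is present."""
    return all(f in example for f in active_features)

labelled = [
    ({"horse-shaped", "striped"}, True),                # wild zebra
    ({"horse-shaped", "striped", "has-saddle"}, True),  # zoo zebra, saddled
    ({"horse-shaped", "has-saddle"}, False),            # plain riding horse
]

def error_rate(active):
    wrong = sum(classify(active, ex) != label for ex, label in labelled)
    return wrong / len(labelled)

features = ["horse-shaped", "striped", "has-saddle"]
best, best_err = set(features), error_rate(set(features))
for f in features:                       # blame one feature at a time
    trial = best - {f}
    if error_rate(trial) < best_err:
        best, best_err = trial, error_rate(trial)
print(best, best_err)                    # drops "has-saddle"; error falls to 0
# One pass costs N re-evaluations; interacting features can force a
# search over 2**N feature subsets.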

> Watfa:
> If the credit/blame assignment problem is a variant of
> the frame problem, and the frame problem a symptom of the
> symbol grounding problem, could the credit/blame assignment
> then be related to the symbol grounding problem as I have
> just illustrated?

Yes, in a hybrid learning system, insufficient definitions would have to
be supplemented by trial-and-error sensorimotor learning, guided by
error-correcting feedback.

But the point here was about the power of a grounded definition that IS
sufficient to guide you without needing to be supplemented with
sensorimotor learning.
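
As a toy sketch of such a hybrid (my illustration only; nothing here
is meant as the actual design): the learner trusts the grounded
definition until the feedback says it has erred, and only then falls
back on crude trial-and-error feature learning, guided by that
error-correcting feedback.

def definition_says_zebra(example):
    """The grounded symbolic definition: horse & stripes."""
    return "horse-shaped" in example and "striped" in example

def hybrid_learn(labelled_examples):
    learned_exceptions = set()                   # features learned from errors
    for example, is_zebra in labelled_examples:  # feedback = the true label
        prediction = (definition_says_zebra(example)
                      and not (example & learned_exceptions))
        if prediction and not is_zebra:          # the definition misfired:
            # crude trial-and-error step: blame the features it ignored
            learned_exceptions |= example - {"horse-shaped", "striped"}
    return learned_exceptions

exceptions = hybrid_learn([
    ({"horse-shaped", "striped"}, True),            # zebra: definition suffices
    ({"horse-shaped", "striped", "blond"}, False),  # Cheshire horse: it does not
])
print(exceptions)                                   # {'blond'}, learned from feedback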

> Watfa:
> Combining both connectionism and symbol systems results in
> a hybrid system that should be able to pass the Turing
> Test, and answer AI's "How?" question: "what is it that
> makes a system able to do the kinds of things normal people
> can do?"

Well, that's one possible way to design a successful candidate, but
symbols + nets could also prove to be insufficient...

Stevan Harnad


