Re: Harnad (1) on Symbol Grounding Problem

From: Grady, James (jrg197@ecs.soton.ac.uk)
Date: Fri Mar 24 2000 - 11:22:47 GMT


> Brown:
> The fact that no one has been able to develop a non "toy" implementation of
> these models suggests that they are incapable of providing anything other
> than toys, despite Chalmers attempts at showing computational sufficiency
> for cognition.

Our current ability to develop little more than toys doesn't seem very
important. What matters is not to write off the possibility that one
approach alone might be sufficient just because we can't say anything
conclusive yet; it is only a guess that we might need both. A goldfish
seems to me to be a creature which performs very little computation.
Perhaps we could do everything a goldfish does using only
connectionism. Could we then scale up to a very clever fish and
eventually to a human?

It seems to me a little fishy that the brain should follow some
explicit algorithm (symbolic manipulation) rather than an implicit one
(neural nets). With this in mind I will float the idea that symbolic
manipulation is a fool's gold T3. It seems a little premature to commit
ourselves to thinking a hybrid system will do the job.

> Brown
> Here we see what I feel is a very good argument against connectionism. Many
> of the things we do are symbolic, therefore a model of the mind should also
> be symbolic. Harnad argues that this may be a reason for the limited
> successes of neural nets. Rather than propose that only symbol systems be
> used instead, Harnad introduces the symbol grounding problem (TSGP), which
> may, in turn explain the toy like results that are achieved with symbolic
> AI.

Just because many of the things we do appear to be symbolic, don't
forget that a lot aren't. A symbol system must 'rulefully combine'
symbol tokens and also be 'semantically interpretable'. We have already
established that this will be incomplete, since we are inconsistent,
often irrational and illogical, and we make mistakes. That sounds more
like implicit computation to me. (It also sounds a lot like the neural
network I programmed for Bob Damper's AI 1!)

> Brown
> My response to this would be to plug in some eyes, let the system see, but
> Harnad argues against this in the next section. The Chinese/Chinese
> dictionary argument also raises another question, perhaps not directly
> related to this, what is the language of thought?

Is the language of thought important, since all languages are
equivalent? (Any squiggle or squoggle will do.)

> Brown
> okay, so we have to generalise to identify, icons tell us that two horses
> are different, but the reason we know they are horses is not because they
> match some internal icon of a horse but because the horse has certain
> features such as four legs and yellow teeth, that make it a horse.

It might be good to mention that there are two different ways of being
able to categorize a horse as a horse. We either found it out the hard
way or we were given the information by others. For example, if you
asked my aunt whether she was a horse (with her walking sticks she has
four legs, and yellow teeth), she would painfully communicate to you
that she was not. On the other hand, if you knew what a farm was, knew
what 'being ridden' meant and knew what a Polo mint was, you could soon
categorize a horse. As Brown goes on to explain:

> Brown
> So, in essence, we can use inheritance to ground symbols. If someone is
> able to identify a horse(1), and identify stripes(2), and is then told that
> the symbol "Zebra" is a stripey horse, then they can recognise one, without
> ever having seen it.

So, interestingly, we can use known symbols to ground unseen symbols.
This raises the question we looked at in class: what is the minimum
number of grounded symbols we need in order to understand a language?
Following on from this, there must also be a ceiling beyond which
having more grounded symbols doesn't help.
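
To make the idea concrete, here is a minimal Python sketch (my own toy
illustration, not anything from Harnad or Brown) in which a grounded
symbol is stood in for by a set of observable features, and a new
symbol like "zebra" is grounded purely by combining symbols that are
already grounded. The feature names and function names are invented
for the example.

# Toy sketch of grounding a new symbol from already-grounded ones.
# The features are stand-ins for whatever the sensory system detects.

grounded = {
    "horse":   {"four_legs", "mane", "yellow_teeth"},
    "stripes": {"striped_pattern"},
}

def ground_by_description(new_symbol, *known_symbols):
    """Ground a new symbol symbolically, as the union of the features
    of symbols we already have grounded (e.g. zebra = horse + stripes)."""
    grounded[new_symbol] = set().union(*(grounded[s] for s in known_symbols))

def identify(observed_features):
    """Return every grounded symbol whose features are all present."""
    return [s for s, feats in grounded.items() if feats <= observed_features]

ground_by_description("zebra", "horse", "stripes")

# A creature we have never seen before:
seen = {"four_legs", "mane", "yellow_teeth", "striped_pattern"}
print(identify(seen))   # ['horse', 'stripes', 'zebra'] - recognised without prior sight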

> To sum up, there have been two main approaches to AI. Both have their
> advocates and their detractors, but the fact remains neither of them has yet
> achieved their long term goal. Harnad here attempts to describe why they
> have failed, and to draw the two methods together in a foundation for the
> eventual creation of an AI.

To conclude I would like to mention the mushroom game from class
yesterday. It was proposed that there are two species of mushroom
eaters. One has the ability to learn to classify mushrooms into edible
and inedible the 'hard way', by eating them and learning which ones are
nourishing and which are poisonous. The other knows that if it hears a
member of the first species eating a mushroom, the mushrooms around it
are going to be edible. We will call the first species the 'toilers'
and the other the 'thieves'. The problem is that in this ecosystem
there will be periodic success of one species after the other, as one
species becomes too successful and starts to starve or is eaten by the
other.

Supposedly the toilers represent acquiring categories the hard way, and
the thieves represent the theft of knowledge, rather like a person
being told that a zebra is a horse with stripes.
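
As an aside, here is a small Python sketch (my own toy model with
made-up coefficients, not the actual game we played) of why the two
populations should boom and bust in turn: I am simply assuming
discrete predator-prey (Lotka-Volterra style) dynamics, with the
thieves playing the predator.

# Toy simulation of the toiler/thief boom-and-bust cycle.

def step(toilers, thieves, a=0.10, b=0.002, c=0.05, d=0.0005):
    """One generation: toilers grow on their own (a) but lose out to
    thieves (b); thieves starve alone (c) but prosper when toilers are
    plentiful (d). Coefficients are arbitrary choices for illustration."""
    new_toilers = toilers + a * toilers - b * toilers * thieves
    new_thieves = thieves - c * thieves + d * toilers * thieves
    return max(new_toilers, 0.0), max(new_thieves, 0.0)

toilers, thieves = 100.0, 20.0
for generation in range(200):
    toilers, thieves = step(toilers, thieves)
    if generation % 25 == 0:
        print(f"gen {generation:3d}: toilers {toilers:7.1f}, thieves {thieves:7.1f}")

# The printed populations rise and fall out of phase: first one species
# is "too successful", then the other, with no equilibrium unless we
# intervene as proposed below.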

I propose a couple of ways to control this environment and bring it to
an equilibrium. How valid they are is another question.

1. The thieves are only allowed to hunt alone. This would cure the
problem of having packs of thieves following a toiler.

2. Introduce a third species called 'God's Wrath', or the 'predators'
to be more PC. This species is sent as divine judgment to feed on the
thieves and thus curb their success. This would leave us with quite a
natural-looking food chain. Possibly the best solution.

3. Allow the thieves to suppress the toilers and breed them
artificially. The thieves would then be responsible for keeping the
equilibrium.

Grady, James <jrg197@ecs.soton.ac.uk>


