Re: Harnad (1) on Symbol Grounding Problem

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Sun Mar 19 2000 - 19:42:41 GMT


On Sat, 18 Mar 2000, Butterworth, Penny wrote:

> I think that I agree with this view intuitively, as well as because of
> the logical reasons given here. The more 'natural' process of training
> a network, and the evolution of weightings which can even produce
> time-dependent outputs where required, seems fundamentally different to
> formal symbol systems. The question really comes up because neural nets
> are often simulated by symbol systems (ie. your average digital
> computer) instead of being fully implemented, but Harnad explains this
> quite well in his footnote...

It is important to remember that AI can do learning too, and that
unless a neural net makes some sort of ESSENTIAL use of the fact
that its implementation is parallel and distributed (P/D), then neural
nets might as well be just computational learning algorithms. Again,
the only ESSENTIALLY noncomputational part is the sensorimotor
transducer surface. Once we get past the sensorimotor interface, all
the nets could in principle do their work as serially simulated nets,
rather than real P/D.
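
(For the programmers in the class, a minimal toy sketch of the point: a
"parallel, distributed" layer of units can be ground out one unit at a
time by an ordinary serial loop. The weights, inputs and squashing
function below are made-up illustrative values, not anyone's model; the
parallel hardware buys speed, not a different input/output function.)

    # Toy sketch: a "parallel, distributed" layer computed serially.
    # The weights and inputs are made-up values, for illustration only.
    import math

    weights = [[0.2, -0.5, 0.1],   # connection weights of unit 1
               [0.7,  0.3, -0.4]]  # connection weights of unit 2
    inputs = [1.0, 0.5, -1.0]      # one input pattern

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Each unit is updated one after another in a plain serial loop,
    # yet the result is the same function the parallel net computes.
    outputs = []
    for unit_weights in weights:
        activation = sum(w * i for w, i in zip(unit_weights, inputs))
        outputs.append(sigmoid(activation))

    print(outputs)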

Of course, the digital serial implementation could take too long, or
require too much storage and processing capacity, but that is another
matter. If its parallelism and distributedness (and general analog
character) are somehow ESSENTIAL to the work it is able to do, that has
to be shown; otherwise, its implementation is irrelevant.

This makes no difference for symbol grounding, because even if the nets
are symbolic, the system is hybrid because of the necessarily analog
sensorimotor surface. Can anyone think of other possible reasons (other
than speed and memory) for other ESSENTIAL noncomputational
components?

> Harnad then introduces a motivation for his presentation of the symbol
> grounding problem, namely that although connectionism has been
> criticised for not being symbolic and so not representative of the more
> logical aspects of the mind, yet symbolism is not without its own
> problems.

One problem being the symbol grounding problem -- but the limited
success of symbol systems in scaling up to T2/T3, after an initial
promising start, is another big problem.

In hindsight, do you think the Goedel/provability problem was really
one of the problems of symbol systems that prevented them from scaling
up?

> As an aside, it may or may not be worth mentioning that I'm not
> actually convinced of the efficacy of Searle's argument. I'm not saying
> that I think the 'program' or 'computer' described does have a mind or
> even an understanding of Chinese (though it's possible that depends on
> your understanding of 'an understanding'!),

Let me interrupt here and remind you that if you want "understanding" to
mean anything more than "its symbols are interpretable BY ME as
understanding" [the same way a virtual heart is interpretable by me as
pumping -- while in reality both the understanding program and the
heart program are really just squiggling], then you owe all of us an
explanation of exactly what this mindless understanding would be!

I freely admit that it's only the MENTAL understanding that
distinguishes real understanding from merely interpretable squiggling.
I can't think of an intermediate case (any more than I can think of an
intermediate case between mere optical transduction -- as in a
photosensitive bank door -- and seeing).

> in fact I would tend
> towards saying it didn't. I simply don't think that Searle makes a
> particularly relevant proof (and no, I can't think of a better).

Important to know that Searle's is not a PROOF. We can only prove
things in mathematics. In science we have evidence. But this isn't even
that, because the other-minds problem makes it impossible to get direct
evidence about whether or not anyone/anything else understands.

So, neither proof nor evidence, Searle's Argument is simply appealing
to what you already know (and have no reason to think is incorrect):
If he were to squiggle in that way, he would not understand (mentally),
and it is so far-fetched as to border on the absurd to think that
anyone ELSE in his head would be understanding (mentally understanding,
remember) either.

Then, by implementation-independence, the same thing has to be true of
the computer implementing the same program.

> I'll
> try to explain myself - when Searle is looking at the Chinese symbols
> and generating his replies, he is presumably using (a) different
> part(s) of his brain than if he had been reading and writing English
> (ie. step-by-step following of memorised rules, rather than the more
> sublimated absorption of recognised language). Therefore it cannot be
> expected that Searle himself will have any understanding of the Chinese
> (which is presumably the conclusion Searle expects us to draw).

Yes; but the question is: If Searle isn't, IS ANYONE ELSE
understanding? If so, who? And if not, then there is no understanding
going on at all, just squiggling (which is interpretable by us as
pen-pal interactions that make systematic -- indeed lifelong -- sense).
And the same goes for when it's the computer that's implementing the
very same program.

> However, to me this implies that Searle is asking the wrong question -
> it is not the language recognition part of his brain which is analysing
> the Chinese, so his 'mind' (whether we are talking about another
> physical part of the brain or some pseudo-physical amalgamation greater
> than the brain) does not understand it, but why does that necessarily
> mean that the language recognition part of the computer/program does
> not generate some kind of understanding 'elsewhere'?

Because understanding (like seeing) is not something that can be done
mindlessly. Would you draw the same conclusion if Searle were doing the
same number that he has just done on T2, but did it instead on T3 (which is
something he cannot do, because transduction, unlike computation, is
not implementation-independent, so Searle can't implement it and BE the
transducer)?

Let's pretend he COULD do it with T3: Pretend there's a T3 robot -- one
of the people in our class turns out to be a robot -- and Searle wants
to "prove" [i.e., pump our intuitions] that he can't really SEE, he
just BEHAVES as if he sees, but in fact there is no seeing going on in
there. So Searle "becomes" that T3 system (don't ask me how, because
only computation is implementation-independent, so he really couldn't
do it), and he implements the transduction with a bunch of photocells
aimed at an image (one that Searle cannot himself see). So he reads off
the values of each of the array of photocells (note, he doesn't know
how they work, or what the values mean, he just follows the rules) and
eventually he finishes the procedures and now goes on to the next step
which is to manipulate a bunch of devices on the basis of the results
-- this time they are motor actuators, which, once he is done, locomote
away (without his knowledge) from the image that the photosensors
registered, because it was a sabre-tooth tiger.

So Searle saw nothing, and doesn't know what happened, but he went
through the motions of the robot. Was there ANY seeing going on there?
Or just the processing of photons, as in a photocell?

(Note that this is not a Searle Argument against T3, because there is
still the System Reply: Searle is not being the whole system, so there
is still the logical possibility that seeing is going on here, but it
is the whole system that is seeing. But am I right that you would be
happier to say there was no seeing here, just optical processing? And
if it was with sound, there would be no hearing, just the processing of
acoustic waves; and same for touch? All because seeing, hearing and
feeling are MENTAL states, and the rest is just causal, physical
reactions, like a ball rolling down a hill or hail falling on an icy
lake" Nobody home!)

This was a risky analogy to use with people who are new to Searle,
because it is in fact a FALSE argument against T3, just as Searle's
argument would have been false if he had left it at the stage where the
symbols and rules were all on the blackboard; for then the "System
Reply" would have been correct there too: "Maybe Searle doesn't
understand but the System as a whole does." Searle handled this by
becoming the whole system. He can't do that with T3. So the analogy
above is NOT an argument against T3. It's just pumping your intuitions
for the difference between a system that is interpretable (by us) as
seeing, but just doing optical transduction, and a system (like us)
that is really seeing. By exactly the same token, there is a difference
between a system that is interpretable (by us) as understanding, but
just doing symbol manipulation, and a system (like us) that is really
understanding. The difference in both cases concerns whether or not
there's somebody home!

> Harnad argues that to be able to discriminate and identify objects and
> classes, we need an internal representation, and proposes 'iconic
> representations', analog transforms of received sensory input. It would
> then be a relatively simple process to measure two icons' degree of
> similarity. However, as Harnad discusses, icons are not sufficient for
> identification as there would simply be too many of them, and their
> distinctions too vague. This is similar to the fictional character we
> were discussing on Thursday, who remembered everything so was unable to
> classify objects because he remembered them all (and all instances of
> each) as distinct. Instead, Harnad says, the unimportant must be
> forgotten...

That's all correct, and the connection with Funes is too.
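
(To make the "degree of similarity" point concrete, here is a toy
sketch in which icons are caricatured as arrays of analog sensory
values; the numbers and the distance measure are illustrative
assumptions only, not a claim about how real iconic representations are
coded.)

    # Toy sketch: icons caricatured as arrays of analog sensory values.
    # The vectors and the distance measure are illustrative assumptions.
    import math

    icon_a = [0.9, 0.1, 0.4, 0.7]   # a stored sensory projection
    icon_b = [0.8, 0.2, 0.5, 0.6]   # a new sensory projection

    def similarity(x, y):
        # Similarity as the inverse of the Euclidean distance between icons.
        distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
        return 1.0 / (1.0 + distance)

    print(similarity(icon_a, icon_b))   # nearer 1.0 means "looks more alike"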

> OK, so we now have these icon thingies, such that if the system sees a
> horse, it can say 'Horse!', but very little else - we have a
> classifier.

As this is T3, you have to suppose that it can do a lot else: It
doesn't just make the unique noise "horse!" whenever it sees a horse,
it also, being a robot, can DO all the things we can do with horses
(walk around them, groom them, ride them, avoid being trampled by them);
and can do it not just visually, but via the other senses as well. In
short, horses (as a kind of input) are grounded for all robotic
interactions with horses, of which naming them is simply the most
abstract kind.

It is on this sort of sensorimotor grounding that the second kind of
representation, which merely combines the symbols that are the names of
the grounded categories, depends.

> And as Harnad discusses a little later, a very good
> candidate for classifiers is connectionism, ie. neural nets. I don't
> know how many of the class did Neural Nets last semester, but this is a
> classic application for them - apply a feature map as input, and the
> net generates an output indicating one particular class. Very nice,
> but we now want to do something with our 'Horse!'...

Right. Now remember that Nets had their problems too, just as Symbols
did, and one of the problems of nets was that they do not have
"compositionality" -- you cannot combine and recombine their states
into the strings of systematically interpretable propositions that you
can generate by combining and recombining symbols. A neural net in a
"horse-detecting" state is just that: It's detecting one state of
affairs, a horse.

But if you then take the (grounded) symbol that NAMES the kind of thing
that that state is detecting, "horse," and combine it with another
(grounded) symbol (say, "striped"), then you may inherit a new kind of
thing, a "zebra," which you can no detect the very first time you see
one (and you can already use it to symbolize further things you've
never seen) -- all because the sensorimotor grounding is inherited by
the combinatory symbolic representations. And there is no
Chinese-Dictionary Problem (in fact, you could WRITE a dictionary, the
way Johnson did, out of the grounded symbols you already have; and you
could learn any symbols you don't yet know from grounded definitions).
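
(One more toy sketch, for those who like code: the "detectors" below
are stand-ins for trained nets, and the feature dictionaries are made
up, but the point survives: the new symbol "zebra" is defined purely by
combining the NAMES of already-grounded categories, and it detects its
first zebra with no new sensorimotor training.)

    # Toy sketch of grounding inheritance. The detectors stand in for
    # trained nets; the feature dictionaries are made up for illustration.

    def is_horse(percept):     # stand-in for a grounded horse-detecting net
        return percept.get("shape") == "equine"

    def is_striped(percept):   # stand-in for a grounded stripe-detecting net
        return percept.get("texture") == "striped"

    grounded = {"horse": is_horse, "striped": is_striped}

    # New symbol defined purely by combining the names of grounded categories:
    # "zebra" = "horse" AND "striped". No new sensorimotor training needed.
    grounded["zebra"] = lambda p: grounded["horse"](p) and grounded["striped"](p)

    first_zebra_ever_seen = {"shape": "equine", "texture": "striped"}
    print(grounded["zebra"](first_zebra_ever_seen))   # True, on first encounter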

> Harnad gives the example that, given awareness of the icons for 'horse'
> and 'stripes', it should be a simple matter to define a new concept
> 'zebra' as the conjunction of these two, and this concept would
> 'inherit' a grounding from them.

Conjunction, or any boolean combination -- anything you could say in
words (symbols), as a matter of fact.

Stevan


