Re: Harnad (1) on Symbol Grounding Problem

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Wed Mar 22 2000 - 21:24:12 GMT


On Wed, 22 Mar 2000, Shaw, Leo wrote:

> >Harnad doesn't actually make clear here what he would consider a 'lifesize'
> >task.

T3: the robotic version of the Turing Test.

> This sounds like a good argument in favour of the 'hybrid system'
> - connectionist systems are criticised because they are not
> symbolic in nature, and symbol manipulation seems to be close to
> the way we think. But some of their properties are desirable,
> like the ability to learn to identify objects and extract
> features, and they can perform symbol manipulation by implementing
> symbol systems. Would it be fair to say that symbol manipulation
> is like another layer that runs on the connectionist 'hardware'?

Not if what you mean is that the neural net is merely the hardware
implementing the symbol system.

    See: Harnad, S. (1990) Symbols and Nets: Cooperation vs. Competition.
    Review of: S. Pinker and J. Mehler (Eds.) (1988)
    "Connections and Symbols." Connection Science 2: 257-260.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad88.symbols.nets.htm

Better to think of the neural net as a front-end to the symbol
system.
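
To make the "front-end, not hardware" point concrete, here is a minimal
sketch (in Python, purely illustrative; the toy nearest-prototype "net"
and all class and function names are my own inventions, not anything
from the paper): the net does the sensorimotor category learning that
grounds the elementary symbol tokens, and a separate symbol system then
composes those grounded tokens into higher-order categories. The symbol
system is not "running on" the net the way software runs on hardware;
it is fed by it.

    # Purely illustrative toy "hybrid" system (all names invented here):
    # a connectionist-style front-end grounds elementary symbols, and a
    # symbol system then composes the grounded tokens by rule.

    import math

    class IconicFrontEnd:
        """Stand-in for the neural net: maps analog sensory projections
        (here just feature vectors) onto learned category prototypes."""
        def __init__(self):
            self.prototypes = {}  # elementary symbol -> prototype vector

        def learn(self, symbol, samples):
            # "Learn" a category as the mean of its sample projections.
            dim = len(samples[0])
            self.prototypes[symbol] = [sum(s[i] for s in samples) / len(samples)
                                       for i in range(dim)]

        def identify(self, projection):
            # Ground a new projection in the nearest learned category,
            # returning an elementary symbol token.
            def dist(p, q):
                return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
            return min(self.prototypes,
                       key=lambda s: dist(self.prototypes[s], projection))

    class SymbolSystem:
        """Stand-in for the symbolic level: composes grounded tokens into
        higher-order categories by rule (e.g. zebra = horse & stripes)."""
        def __init__(self):
            self.rules = {}  # composite symbol -> required grounded symbols

        def define(self, composite, *parts):
            self.rules[composite] = set(parts)

        def applies(self, composite, grounded_tokens):
            return self.rules[composite] <= set(grounded_tokens)

    # Usage: the front-end grounds "horse" and "stripes" from toy sensory
    # input; the symbol system composes them into "zebra" without ever
    # touching the analog input itself.
    net = IconicFrontEnd()
    net.learn("horse", [[1.0, 0.1], [0.9, 0.2]])
    net.learn("stripes", [[0.1, 1.0], [0.2, 0.9]])
    tokens = [net.identify([0.95, 0.15]), net.identify([0.15, 0.95])]

    symbols = SymbolSystem()
    symbols.define("zebra", "horse", "stripes")
    print(tokens, symbols.applies("zebra", tokens))  # ['horse', 'stripes'] True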

> I'd like to comment on one point about icons:
>
> HARNAD:
> > According to the model being proposed here, our ability to
> > discriminate inputs depends on our forming 'iconic
> > representations' of them (Harnad 1987b). These are internal analog
> > transforms of the projections of distal objects on our sensory
> > surfaces...
>
> Although I can see the justification for icons, I'm a bit confused
> about the requirement that they be 'formed' before we can make
> discriminations.

You are right. For an online discrimination, no internal representation
or memory is needed as long as the two inputs you need to discriminate
are simultaneous; if they are successive, though, you need a memory
trace of the first to compare with the second, and that trace would be
the icon.

> baby chimps have an innate fear of snakes - even without past experience

Correct, but that is not discrimination (which is merely a pairwise
comparison). The innate fear of snakes is probably based on inborn
rather than learned feature-detectors (and not on icons either).

> This would seem to imply that the
> brain is already capable of discriminating shapes, and has
> pre-defined behavior towards some types.

Yes, but that is not discrimination. It is what we have been calling
identification, or categorization.
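
The difference can be put in functional terms with a tiny sketch
(Python, illustrative only; the threshold, the toy detector, and all
names are my own assumptions): discrimination is a relative, pairwise
same/different judgment on analog inputs, needing a stored trace (icon)
only when the two inputs are successive rather than simultaneous,
whereas identification assigns a lone input to a category on the basis
of invariant features; an inborn detector (as with the snakes) plays
the same functional role as a learned one.

    # Illustrative only; names and numbers are invented for the example.

    def discriminate(projection_a, projection_b, threshold=0.1):
        """Pairwise same/different judgment on two analog projections.
        If the inputs arrive in succession, projection_a is the stored
        icon (memory trace) of the earlier one."""
        difference = sum(abs(a - b) for a, b in zip(projection_a, projection_b))
        return "different" if difference > threshold else "same"

    def identify(projection, feature_detectors):
        """Categorization: reduce the projection to invariant features and
        return the name of the category whose detector fires."""
        for name, detector in feature_detectors.items():
            if detector(projection):
                return name
        return "unknown"

    # A toy inborn feature-detector: no learning, no icon needed.
    detectors = {"snake-shaped": lambda p: p[0] > 0.8}

    stored_icon = [0.9, 0.2]                      # trace of an earlier input
    print(discriminate(stored_icon, [0.9, 0.25])) # prints "same"
    print(identify([0.9, 0.2], detectors))        # prints "snake-shaped"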

> Incidentally, the
> paragraph mentions 'projections of distal objects on our sensory
> surfaces' and says that 'For identification, icons must be
> selectively reduced to those 'invariant features' of the sensory
> projection that will reliably distinguish a member of a
> category...' - is this something to do with the repeated
> 'retina-like' structures that are found in the brain?

The function of the brain's multiple analog copies of the retina is not
yet fully understood. Being analog, they may be where the icons occur;
but some have suggested they do feature-detection as well.

> This paper seems concerned largely with
> 'learning' - obviously with the emphasis on symbol grounding. As
> a hypothetical question, suppose at some point in the future we
> are able to scan a brain at an atomic level and produce a
> simulation on a computer.

We are back to simulation. Not all those brain details may be relevant
or necessary, but let's go on:

> The computer (or program) is advanced
> in that it is able to accurately model the behavior of atoms but
> apart from that it is similar to today's machines.

So again we are dealing with a predictor, as has come up many times in
class. Such a system could predict what it takes to pass T3, but it
cannot itself pass T3 unless it is part of a T3 robot.

> Surely if such
> a computer could exist... the
> modelled brain would perform exactly like the real one - we could
> even artificially stimulate the sensory regions to simulate
> sensory input.

Exactly the same way we could model an airplane computationally: It
wouldn't be a plane, and it wouldn't fly.

> In this case, the issue would not be whether a
> computer could pass the turing test and be deemed capable of
> thought, but how we could arrive at such a 'brain' without copying
> an existing one - the process of 'learning'.

No, you've lost me. If brain scanning happens to be the way you construct a
brain simulation, it would just be like a 3-D video of the brain scan.
To WORK (rather than just mimic the X-ray), the model has to be able to
DO what the brain can do. If sticking the model in as a component of a
robot gets the robot to pass T3, then, bravo, you have passed T3. But
if you did it just by "videoing" the brain (I doubt that would do the
trick, but suppose it did), then the only problem would be that you
still don't know how it works (even though it does work)!

So it would be like CLONING a brain that passes T3: a nice trick, but it
doesn't advance our reverse-engineering UNDERSTANDING of how to pass T3;
it is just a (miraculous) forward-engineering trick for creating a T3
robot without having to know how it works.

> This has some
> relation to the question of whether, if all the sensory inputs
> were removed from a 'thinking' T3 machine, would it still 'think'.

Not quite. A simulation is still just a simulation. If the
brain-video-digital-simulation, when "inserted" into a dumb robot,
miraculously made it into a T3 robot, you would have cloned a mind,
that's all. The brain would still be more than just that brain-scan
module (e.g., there would still be all the real sensorimotor
transduction to do, exactly as in the real brain). Run the idea through
on a "plane-scan" instead of a brain-scan, and see what it gets you...

> The answer must be yes, but the question is whether the T3 level
> is required to form a consciousness.

T3 is as important for consciousness as a real plane and real air are
for flying.

Stevan


