Re: Davidson on Symbol Grounding

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Sun May 21 2000 - 18:16:12 BST


On Thu, 18 May 2000, Brooking, Stephen wrote:

> > Pentland:
> > Has anyone any ideas about approaching this the opposite way? Give
> > grounding to symbols using a lower level language. By doing this will the
> > regression lead to insignificantly small and pointless symbols so as not
> > to matter?
>
> Brooking:
> Davidsson did not say a "higher level" language, but a "more powerful"
> language. I don't think the two are necessarily connected.
> Pentland mentions using a lower level language to describe symbols, but is
> there a lower level than symbols? I don't think so.

I would agree with Brooking.

Smolensky (1988) suggested that there was a "subsymbolic" "level" -- but
I don't think he managed to make the distinction coherent. (He was
thinking of something like bottom-up grounding with neural nets, but
what he calls SUBsymbolic should probably be called NONsymbolic.)

    Smolensky, Paul. (1988) On the proper treatment of connectionism.
    Behavioral & Brain Sciences 11(1): 1-74.
 
> > Pentland:
> > Ground symbols in relation to each other, although this will lead to an
> > infinite loop of referrals in the search for a grounding.
>
> Brooking:
> Indeed, this will lead to a recursive grounding of symbols on symbols on
> symbols... One symbol may be grounded on another, but somewhere in the
> chain, there must be a symbol grounded on something that doesn't
> need grounding.

Right. That's the symbol grounding problem. But what you mean is that
there must be some symbols that are grounded in something other than
just more symbols. One possibility is that they are grounded in neural
nets that can pick out the object the symbol stands for, from its
sensorimotor projection to/from a robot.
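
To make that possibility concrete, here is a minimal sketch (Python,
purely illustrative; the class name and the toy perceptron are
hypothetical stand-ins, not anyone's actual model) of a symbol token
whose "grounding" is a nonsymbolic categorizer trained on sensory
projections:

    # Toy illustration: a symbol token grounded in a crude "neural net"
    # (a perceptron) that learns to pick out instances of its category
    # from feature vectors standing in for a robot's sensory projections.
    # Hypothetical names; not a serious model of categorization.

    class SensorGroundedSymbol:
        def __init__(self, name, n_features):
            self.name = name                   # the arbitrary symbol token
            self.weights = [0.0] * n_features  # the nonsymbolic part
            self.bias = 0.0

        def detects(self, projection):
            # Does this symbol's net pick its referent out of this projection?
            score = sum(w * x for w, x in zip(self.weights, projection))
            return score + self.bias > 0

        def learn(self, projection, is_instance, rate=0.1):
            # Perceptron-style update from a labelled sensorimotor encounter.
            target = 1 if is_instance else -1
            predicted = 1 if self.detects(projection) else -1
            if predicted != target:
                self.weights = [w + rate * target * x
                                for w, x in zip(self.weights, projection)]
                self.bias += rate * target

    # "chair" is grounded only insofar as its net can pick chairs out of
    # the projections, not by pointing to yet more symbols.
    chair = SensorGroundedSymbol("chair", n_features=4)
    chair.learn([1.0, 0.9, 0.1, 0.0], is_instance=True)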

> Brooking:
> But what doesn't need grounding? How do humans perform symbol
> grounding? Our eyes and brain see and interpret what we see around us;
> we see a computer and it means something to us. How can this be
> implemented in a robot?

Good question...

> Brooking:
> So the robot must be able to add instances of a category at any time.
> Taking the chair example - the robot will not know what all chairs look
> like, but will maybe have a few examples. Every time a new example of that
> category is encountered, it must be able to add it to those already stored.

Does it need to store instances, or just the capacity to handle them,
if/when encountered again?
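
The contrast can be made concrete with another hedged sketch (both
learners below are textbook toys with hypothetical names, not a claim
about how a T3 robot actually does it): one keeps every instance, the
other keeps only the capacity -- a running prototype -- and discards
the instances themselves.

    # Two toy ways a learner could "know" a category such as CHAIR.
    # Illustrative only; hypothetical names.

    class ExemplarLearner:
        """Stores every encountered instance (instance-based)."""
        def __init__(self):
            self.instances = []

        def add(self, projection):
            self.instances.append(projection)

        def is_member(self, projection, threshold=1.0):
            # Nearest stored instance within threshold counts as a match.
            def dist(a, b):
                return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
            return any(dist(projection, p) < threshold for p in self.instances)

    class PrototypeLearner:
        """Keeps only a running prototype -- a capacity, not the instances."""
        def __init__(self, n_features):
            self.prototype = [0.0] * n_features
            self.count = 0

        def add(self, projection):
            # Incremental mean: the instance itself is never stored.
            self.count += 1
            self.prototype = [p + (x - p) / self.count
                              for p, x in zip(self.prototype, projection)]

        def is_member(self, projection, threshold=1.0):
            dist = sum((x - p) ** 2
                       for x, p in zip(projection, self.prototype)) ** 0.5
            return dist < threshold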

> > Pentland:
> > The argument that vision is necessary, and that symbolic representations
> > will thus not be needed, I find hard to understand. If there are no symbols
> > then what does the system manipulate in order to perform any function?
> >
> > As DAVIDSON states, if this is the case the symbol grounding problem is
> > irrelevant, and he goes on to say that no matter what the system uses it
> > still MUST have some sort of concepts.
>
> Brooking:
> A form of the symbol grounding problem will always be apparent. Vision
> shouldn't be necessary, but certainly some sensory interaction with the
> world is required. Humans born blind are still able to learn. Even without
> sight and hearing, humans have tactile sensory interaction with the world
> and this is an important sense to have. Babies and young children like to
> touch and play with things, and I believe this is an important part of
> learning. As humans learn, by interpretation of the world through any
> senses available, that interpretation is the manipulation of symbols
> presented to the system by the sensors.

Sensorimotor interaction is not optional for a robot, but symbols are;
so if there are no symbols, there is no symbol grounding problem. But
can a robot pass T3 without any symbols (computation) at all?

> Brooking:
> Indeed, to get around the symbol grounding problem, the robot cannot be
> told about the world by a supervisor, but has to make inferences for
> itself.

Can't the world itself be the robot's supervisor, making him sick when
he eats the wrong kind of mushroom, and healthy when he eats the right
kind? (This is what Skinner calls "feedback from [the] consequences" [of
actions].)
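
A last illustrative sketch (again hypothetical and deliberately
trivial): the world, not a teacher, supplies the only error signal,
as "feedback from consequences":

    # Toy "world as supervisor": the only teaching signal is the
    # consequence of acting. No labels are given in advance; the robot
    # learns which mushroom kinds to eat from getting sick or staying
    # healthy. Hypothetical names; not a model of any particular theory.

    import random

    POISONOUS = {"red_spotted"}   # known only to the world, not the robot

    def consequence(kind):
        """The world's feedback: -1 (sick) or +1 (healthy)."""
        return -1 if kind in POISONOUS else +1

    preference = {}               # robot's learned value for each kind

    for trial in range(100):
        kind = random.choice(["red_spotted", "brown", "white"])
        # Eat it unless already learned to be bad (explore when unknown).
        if preference.get(kind, 0) >= 0:
            outcome = consequence(kind)
            preference[kind] = preference.get(kind, 0) + outcome

    # After a few trials the robot avoids "red_spotted" without ever
    # having been told a rule about it by an external supervisor.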

Stevan


