Re: Ziemke on "Rethinking Grounding"

From: Grady, James (jrg197@ecs.soton.ac.uk)
Date: Fri May 12 2000 - 10:48:14 BST



>SHAW:
>The explanation of cognitivism seems fairly straightforward - the
>processes of transduction (percepts -> internal representations), and
>cognition (manipulation of the internal representations) are distinct.
>(Interestingly, this still permits cognitive processes to be
>implementation independent).

To expand a little on Cognitivism:
Cognitivism relegates environmental transduction (the perception of
the surroundings) to peripheral functionality. The peripheral interface
is there simply to provide grounding to the central system; the
'personality' of such a robot is held entirely in the central system.
Harnad (1990) proposed that iconic and categorical representations
can be derived from the non-symbolic representations taken from the
interface. These grounded symbols provide the grounding for what
could be an entirely symbolic central system.

Cognitivism also accommodates entirely non-symbolic representation
(Lakoff 1993), in which sensory percepts are transduced onto
non-symbolic (typically connectionist) networks.
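To make the transduction path concrete, here is a toy sketch of the
Harnad-style pipeline (percept -> iconic -> categorical -> symbol). All
names and the threshold are hypothetical illustrations, not anyone's
actual model:

```python
# Toy sketch of Harnad-style grounding: a raw sensory percept is
# reduced to an iconic representation (a simple summary feature),
# which a categorical filter maps onto a discrete symbol that an
# entirely symbolic central system could then manipulate.

def iconic(percept):
    """Reduce a raw percept (a list of intensity samples) to an
    iconic representation: here, just its mean intensity."""
    return sum(percept) / len(percept)

def categorical(icon, boundary=0.5):
    """Map the iconic representation onto a category symbol,
    filtering out within-category variation."""
    return "LIGHT" if icon >= boundary else "DARK"

# The symbol is grounded by the transduction path from percept to
# category, not by a designer-supplied dictionary.
print(categorical(iconic([0.9, 0.8, 0.7])))  # -> LIGHT
print(categorical(iconic([0.1, 0.2, 0.0])))  # -> DARK
```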

>SHAW:
>Enactivism, on the other hand, is based on the less intuitive concept of
>cognition as a function of 'embodied action'. This term refers to the
>belief that cognition is inseparably linked to processes of perception
>and action that are experienced through sensorimotor faculties.

To expand a little on Enactivism:
Enactivism holds that cognition requires three things. Firstly, any
intelligent creature must have a body, so that it has individuality
and functionality. Secondly, this body is embedded in an environment:
a biological, psychological and cultural context. Thirdly, the body
must be able to interact within this environment. In contrast to the
cognitivist robot's extrinsic 'central' personality, the enactivist
robot has a number of parallel subsystems from which its behaviour
emerges.
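The "behaviour emerging from parallel subsystems" idea can be
caricatured in a few lines, in the spirit of Brooks' subsumption
architecture. The layer names and sensor format below are hypothetical
illustrations:

```python
# Toy sketch of behaviour emerging from parallel subsystems with no
# central planner: each layer either proposes a command or defers,
# and a higher-priority active layer suppresses the ones below it.

def avoid(sensors):
    """Highest priority: back off when an obstacle is close."""
    if sensors["obstacle_distance"] < 0.2:
        return "reverse"
    return None  # inactive; defer to lower layers

def seek_light(sensors):
    """Middle priority: steer towards the brighter side."""
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None

def wander(sensors):
    """Lowest priority: default behaviour."""
    return "forward"

def act(sensors, layers=(avoid, seek_light, wander)):
    """The first active layer wins; there is no central arbiter
    reasoning about which action to take."""
    for layer in layers:
        command = layer(sensors)
        if command is not None:
            return command

print(act({"obstacle_distance": 0.1, "light_left": 1, "light_right": 0}))
# -> reverse (the avoid layer suppresses light-seeking)
```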

>SHAW:
>Incidentally, although this is not really the point of the paper, the
>enactivist approach seems a little unnatural. Although the idea of an
>agent's behaviour 'evolving' as more components are added is sound, there
>is no central 'intelligence' that could think about, for example, which
>actions to take.

A bit like a zombie, perhaps: a creature which displays life-like
characteristics but which has no real intentions or reasons.

>>Ziemke
>> Accordingly, for the above labelling act to make sense to an agent, that
>> agent would have to be able to at least use its spatial labels in some
>> way, to profit in some way from developing the capacity to do so, etc.

This raises the issue of where such a creature is going to get any
kind of intentionality or motivation from. Even if it has the
functionality, why should it use it? What is to stop our creature
being a couch potato? This question challenges the validity of
traditional cognitivist modelling, where it is assumed that:

>>Ziemke:
>>most cognitivist approaches follow the tradition of neglecting action
>>and attempt to ground internal representations in sensory invariants
>>alone.

Surely a creature must have a clearly grounded sensorimotor capacity,
without which any model would struggle to react or to initiate
interaction. It can't just absorb information; it must interact.
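The point that sensing and acting are inseparable can be caricatured
as a closed sensorimotor loop, where each action changes the next
percept. A toy sketch (the one-dimensional world and all names are
hypothetical):

```python
# Toy closed sensorimotor loop: the agent does not just absorb
# information; its action at each step determines what it senses
# at the next step, coupling perception and action.

def step(position, percept):
    """Act on the percept: move one unit towards the target at 0."""
    if percept > 0:
        return position - 1
    if percept < 0:
        return position + 1
    return position

def run(position, steps=10):
    """Alternate sensing and acting for a fixed number of steps."""
    for _ in range(steps):
        percept = position  # sensing: signed distance from target
        position = step(position, percept)
    return position

print(run(5))   # -> 0
print(run(-3))  # -> 0
```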

>SHAW:
>The first point seems robust: an agent would certainly have to
>understand its actions so they would need to be 'intrinsic' to the
>system. The second point is less clear - the concept of a 'fully
>grounded system' hasn't really been justified and it isn't obvious why
>the organisation of the transducer shouldn't be determined by the
>designer.

Perhaps if you interpret 'intrinsic' as something slightly more than
understanding, it throws more light on the meaning of the second
point. If the robot's actions are intrinsic, they could be said to be
a fundamentally inseparable part of the robot: coming naturally from
within, not extrinsically imparted by a designer. Any 'separate'
(externally created) routines given to the robot by the designer
would be, by nature, a fundamentally separate part of the machine,
so could not be described as intrinsic. It follows that entirely
intrinsic behaviour must be derived by the robot for itself from the
environment. The designer's job is not to create but to facilitate
creation.

It could be further argued that the designer has only imperfect
knowledge to offer, so removing them from the design removes the
project's biggest source of imperfection.

>SHAW:
>In the section 'Grounding Complete Agents', mention is made of the fact
>that the only known intelligent systems are the result of millions of
>years' co-evolution between individual systems and their environment.
>In light of this, it is suggested that attempts to produce artificial
>agents should pay greater attention to factors like 'physiological
>grounding', of which one example is:

>>Ziemke
>> ... a perfect match/correspondence between the ultraviolet vision
>> of bees and the ultraviolet reflectance patterns of flowers.

It does seem that evolution is the most natural way to create
intelligent beings: evolution used as a creative tool.
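"Evolution as a creative tool" has a standard computational reading: a
designer supplies only a selection pressure, and the solution is
derived rather than hand-built. A toy sketch of a (1+1) evolutionary
algorithm (the target string and fitness function are hypothetical
stand-ins for an environment; nothing here models real biology):

```python
import random

# Toy (1+1) evolutionary algorithm: the 'designer' specifies only a
# fitness measure (the environment); mutation and selection derive
# the solution. Facilitating creation rather than creating.

def fitness(genome, target):
    """Count matching bits: how well the genome 'fits' its environment."""
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(target, generations=2000, seed=0):
    """Hill-climb by mutation and selection from an arbitrary start."""
    random.seed(seed)
    parent = [0] * len(target)
    for _ in range(generations):
        child = mutate(parent)
        if fitness(child, target) >= fitness(parent, target):
            parent = child
    return parent

target = [1, 0, 1, 1, 0, 1, 0, 0]
# With this seed and budget the genome converges to the target.
print(evolve(target))
```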

>SHAW:
>Again, this could be seen as taking the idea of minimising designer
>input too far. Although the sensory inputs of biological organisms are
>the result of evolution, it is hard to see how the presence of
>ultrasonic sensors on a robot would hinder its cognitive capacity.

Surely cognitive capacity develops with real-world experience. A
robot would have to be in harmony with its environment to fit in and
develop: a ballet-dancer robot would not fit into a rugby-team
environment, and so would be grossly hindered in its development of
cognitive capacity. Any extrinsic physiology imparted on a robot by a
designer would lack the natural environmental harmony required for
development. It is therefore hard to say at what point you are
minimising designer input too far.

To develop the extrinsic/intrinsic debate further: could a robot
with extrinsic mathematical ability substitute for one able to grasp
the concept of simple addition?
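One way to picture the contrast (a hypothetical illustration, not a
claim about any real system): an 'extrinsic' adder is a
designer-supplied lookup table, valid only where the designer
anticipated, while a grasped concept covers cases the designer never
listed.

```python
# 'Extrinsic' arithmetic: a designer-imparted lookup table covering
# only the cases the designer anticipated.
extrinsic_add = {(a, b): a + b for a in range(10) for b in range(10)}

def add_extrinsic(a, b):
    """Returns None outside the designer's table."""
    return extrinsic_add.get((a, b))

def add_intrinsic(a, b):
    """Stands in for a grasped concept: applies everywhere."""
    return a + b

print(add_extrinsic(3, 4))    # -> 7
print(add_extrinsic(12, 30))  # -> None: beyond the designer's table
print(add_intrinsic(12, 30))  # -> 42
```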



This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:28 GMT