Re: MacDorman: Grounding Symbols through Sensorimotor Integration

From: Paramanantham, Daran (dp797@ecs.soton.ac.uk)
Date: Fri May 19 2000 - 09:55:25 BST


"Grounding Symbols through sensorimotor integration", Karl F. MacDorman
http://www.cogsci.soton.ac.uk/~harnad/Temp/CM302/macdorman.pdf

>Butterworth
>MacDorman begins the paper by discussing two types of robot systems (similar
>to the cognitivist and enactivist approaches in the Ziemke paper) and
>their strengths and limitations. The first is the pure symbol system,
>which suffers from the Symbol Grounding problem as we've already
>discussed. More specifically (a kind of extended application of the
>grounding problem), he says that not only do the symbols need to be grounded, but
>also the rules for manipulating them, and the whole learning process.

>>MacDorman:
>>The basic problem with this arrangement is that symbol manipulation depends
>>solely on properties that are part of the system's internal workings: how
>>computer hardware implements the system's syntactic constraints. But the
>>sensorimotor relation between a robot's body and the external environment
>>must be able to influence the causal relation between its internal
>>symbols and the external state of affairs they represent.

>Butterworth
>He mentions that some attempts have been made to set up symbol-object
>connections in advance, but these are no good because they can never
>really cope with an unpredictable real world environment. This is another
>example of what Ziemke might have referred to as too much interference by
>the designer, but I agree with Harnad that what matters is the result (ie. simply
>whether it passes T3 without cheating). It may be that some genius in the
>future will come up with a minimal set of building-block
>symbols/objects/concepts which can be built upon to represent anything that
>intelligent real-world thought needs (ie. T3-passing), but we're not there yet.

If symbol-object connections are set up in advance, there is the possibility of an
exponential-time problem arising whenever the robot has to search through chains of
decisions to reach a desired goal. If that is the case, the system will not be
feasible in the long run.
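
To make the exponential-time worry concrete, here is a minimal sketch (in Python,
with made-up branching factors and plan lengths, not figures from MacDorman) of how
quickly the number of action sequences a robot would have to consider grows with the
length of its plan:

    # Minimal sketch of the combinatorial explosion in exhaustive planning.
    # The branching factor and plan lengths below are illustrative only.

    def sequences_to_consider(branching_factor, plan_length):
        """Distinct action sequences of the given length."""
        return branching_factor ** plan_length

    for plan_length in (1, 5, 10, 20):
        print(plan_length, sequences_to_consider(10, plan_length))
    # With only 10 possible actions per step, a 20-step plan already has
    # 10**20 candidate sequences, far too many to check one by one.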

>MacDorman also introduces the term 'affordance'.

>> MacDorman:
>> [the robot]'s opportunities for interaction and the kinds of sensorimotor
>> invariance available to recognise them. J.J. Gibson (1979) called these
>> opportunities affordances.

>Butterworth
>It may be interesting to note that there is apparently some debate in the field
>of Psychology about Gibson's proposal, and whether the human brain actually
>recognises affordances before classifying an object (ie. automatically
>realising something as a place to sit before recognising it as a chair or
>ledge). But MacDorman simply uses the term to refer to the opportunity of
>taking some action, and what level of 'consciousness' performs the
>recognition of these affordances is not really relevant to MacDorman's
>robot.

Humans recognise some affordances unconsciously no matter what kind of environment
they are in, when walking, putting on clothes, and so on. If a robot could achieve
this, it would be getting closer to emulating the human brain, ie. thinking the way
humans do and making conscious decisions.

>The second approach to artificial intelligence he mentions is what Brooks
>called the subsumption architecture (the equivalent to Ziemke's
>Enactivism)

>> MacDorman:
>> ...in which each processing layer constitutes a behaviour (e.g. wander, avoid
>> obstacles, track ball, shoot goal). Layers run in parallel with minimal
>> interaction. They enjoy a tight coupling with sensing and action by directly
>> using the robot's sensing of the environment as a point of reference instead
>> of a centralized representation. This makes for fast, reactive behaviour.

>Butterworth
>The problem he mentions with this is that these robots cannot adapt to changing
>affordances, and generally aren't competent enough. A simple graft of a symbol
>system on top does not help, because the symbol system is still only using
>internal syntactic constraints.

Having one centralised symbol system creates a bottleneck: all the interaction
between the internal workings and the external world has to flow through it, which
results in delays. The architecture Brooks proposed to deal with this puts the
behaviours in separate parallel layers. However, this causes another problem:
no layer has overall knowledge of the other layers, which leaves the robot unable
to adapt to changing affordances.
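
As a rough illustration of the layered idea (my own Python sketch, not Brooks's
code, and with arbitration simplified to a priority list rather than truly parallel
layers), each layer maps the robot's current sensing directly to an action, and a
higher layer overrides the ones below it when it fires:

    # Illustrative sketch of subsumption-style control (not Brooks's code).
    # Each layer reads the current sensor state directly and proposes an
    # action; a higher-priority layer subsumes (overrides) the lower ones.

    def wander(sensors):
        return "move-forward"            # default behaviour, always active

    def avoid_obstacles(sensors):
        if sensors.get("obstacle_close"):
            return "turn-away"           # overrides wandering
        return None

    def track_ball(sensors):
        if sensors.get("ball_visible"):
            return "steer-toward-ball"   # overrides both layers below
        return None

    LAYERS = [track_ball, avoid_obstacles, wander]  # highest priority first

    def act(sensors):
        for layer in LAYERS:
            action = layer(sensors)
            if action is not None:
                return action

    print(act({"obstacle_close": True}))   # -> turn-away
    print(act({"ball_visible": True}))     # -> steer-toward-ball

Notice that no layer consults a central model of the world; each one reacts to the
sensors directly, which is exactly why the behaviour is fast but hard to adapt.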

>The discussion on the Ziemke paper mentioned the need for centralised reasoning,
>and MacDorman also mentions the importance of being able to coordinate both
>thinking and action.

>> MacDorman:
>> To deal effectively with new situations a robot needs to model its
>> affordances so that it can test its actions against a model before testing
>> them against the world... A centralised representation may in fact form the
>> core of a robot's affordance model, serving as a global conceptualisation.

>Butterworth
>MacDorman reasonably mentions that this global conceptualisation does not have
>to be geographically central or one big lump...

As Brooks argued, if the global conceptualisation is situated in one central place,
a bottleneck can occur in the communication between sensing and action. However, as
we have seen, making the system purely parallel causes other problems of its own.
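
Here is a sketch of what the idea quoted above (testing actions against a model
before testing them against the world) could amount to; the Python is illustrative,
with assumed predict and score functions, not MacDorman's implementation. The robot
uses its affordance model to predict the outcome of each candidate action and only
executes the most promising one:

    # Illustrative sketch of model-based action selection (assumed
    # interfaces, not MacDorman's implementation): candidate actions are
    # tried against an internal affordance model before any of them is
    # tried in the world.

    def choose_action(state, candidate_actions, predict, score):
        """predict(state, action) -> predicted next state (the affordance
        model); score(state) -> how desirable that state is."""
        best_action, best_score = None, float("-inf")
        for action in candidate_actions:
            predicted = predict(state, action)   # test against the model...
            value = score(predicted)
            if value > best_score:
                best_action, best_score = action, value
        return best_action                       # ...then act in the world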

>> MacDorman:
>> Simple units together constitute a global conceptualisation to the extent that
>> their separate interactions foster global coherence and integration among
>> separate bits of information.
>> ...
>> Implementing it with a traditional symbol system results in the frame problem.
>> This is not only because it creates a processing bottleneck, but because ...
>> the fact that they only have syntactic constraints means that they can
>> represent anything that is logically possible including a limitless number of
>> absurd concepts.

>Butterworth
>MacDorman then mentions two ways in which biological systems (and particularly
>humans) do not suffer from the frame problem. The first is that reasoning is
>empirically and functionally constrained, such that physically unreal
>possibilities are not even considered. And second is that we are able to
>automate and parallelise routine actions, so that they don't take up
>conscious thought. The example MacDorman gives is that of walking, which
>takes all a child's concentration when first learned, but quickly with
>practice becomes an automatic procedure to such an extent that we can
>talk, play games, kick a ball etc. at the same time.

>So the system which MacDorman wishes to propose needs to combine the
>advantages of the subsumptive architecture (which alone is like driving a
>car by instinct, but not being able to learn anything new) ie. having
>habitual parallel behaviours, with those of the global conceptualisation
>(which alone is like driving without ever having practiced driving
>before).

The system MacDorman proposes can, in effect, make routine decisions unconsciously
and so free up capacity for conscious processing, ie. it addresses the frame problem.
But for the robot to make any sort of decision, unconscious or not, it still has to
work through the possible goals and actions that make up that decision. Given that,
will we ever have a robot that can learn for itself, ie. from its environment,
without having the knowledge built into it in advance? If not, it will lack the
ability to adapt to changes in its affordances.

>MacDorman's system also needs some way of modelling affordances...

>> MacDorman:
>> An intelligent robot can discover the various interactions and effects that
>> its environment affords by learning spatiotemporal correlations in its
>> sensory projections, motor signals, and internal variables. These
>> correlations are a kind of embodied prediction about the future...
>> Currently active predictions constitute the robot's affordance model.

>Butterworth
>Then if a prediction fails to produce the expected result, errors divert
>attention to the possible miscategorisation to aid the process of learning
>affordances.
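
As a rough sketch of this error-driven scheme (illustrative Python; the table-based
predictor is a simplification of my own, nothing like MacDorman's wavelet-signature
machinery), the robot keeps a prediction of what each category of object affords and
only revises it when the prediction fails:

    # Illustrative sketch of error-driven affordance learning (my own
    # simplification): a prediction is only revised when the observed
    # outcome contradicts the expected one.

    affordance_model = {}   # category -> expected outcome of contact

    def interact(category, observed_outcome):
        expected = affordance_model.get(category)
        if expected == observed_outcome:
            return "prediction confirmed; nothing to learn"
        # The prediction failed (or none existed yet): attention is
        # diverted to the possible miscategorisation and the model is
        # updated.
        affordance_model[category] = observed_outcome
        return "prediction failed; model updated"

    print(interact("small-round-robot", "edible"))    # first contact: learn
    print(interact("small-round-robot", "edible"))    # confirmed: no learning
    print(interact("small-round-robot", "harmful"))   # error: relearn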

>MacDorman then introduces his robot, which I will have to call Psi-ro for lack
>of greek symbols. The idea is that it trundles around an environment with
>other robots which are either 'food' or dangerous to it. It uses certain
>algorithms for identifying moving objects then decomposing the image into
>a signature in the form of wavelet coefficients, which may be similar to
>processes in the brain. When it makes contact with an object it turns the
>sensorimotor feedback into a categorical representation (as discussed
>before Easter in Harnad's paper on the symbol grounding problem) by taking
>the invariant and identifying signatures from those it has been
>accumulating. Once it has learned some in this way Psi-ro can begin to
>predict affordances, only learning if it miscategorises. In a similar way,
>Psi-ro learns the effects of its movements on its environment by
>developing predictions based on past experience of how motor signals
>affect sensory projections. In this way, Psi-ro is learning empirical
>constraints. It can use these predictions to analyse chains of actions, so
>can plan ahead by searching through the phase space for the most efficient
>path to its goal.

>We can see then that Psi-ro does implement MacDorman's proposal of using a
>global conceptualisation, modelling future actions before implementing
>them.
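
The planning step (searching through the phase space for the most efficient path to
the goal) could look something like the breadth-first sketch below; the Python is
illustrative, with assumed predict and is_goal functions, and MacDorman's actual
search procedure may well differ:

    # Illustrative breadth-first search over predicted chains of actions
    # (assumed interfaces; not necessarily MacDorman's method).
    from collections import deque

    def plan(start_state, actions, predict, is_goal, max_depth=10):
        """Return the shortest chain of actions whose predicted outcome
        satisfies is_goal, or None if nothing is found within max_depth."""
        frontier = deque([(start_state, [])])
        while frontier:
            state, chain = frontier.popleft()
            if is_goal(state):
                return chain
            if len(chain) < max_depth:
                for action in actions:
                    frontier.append((predict(state, action), chain + [action]))
        return None

Because every step uses the learned predictions rather than the real world, the
whole search happens in the robot's head before any motor command is issued; it also
makes plain why, without the empirical constraints MacDorman mentions, such a search
blows up exponentially with depth.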

Since Psi-ro can predict affordances, it can free up conscious capacity and make
decisions far more quickly. However, the kind of global conceptualisation used will
be a major factor in its overall performance and decision making. If Psi-ro has a
fully centralised system then, as Brooks argued, a bottleneck between its internal
workings and the external world can occur. This would hurt Psi-ro's overall
performance, so its ability to predict affordances might not show up in the
performance measures: the bottleneck could drag down the speed of its unconscious
decision making.

Daran Paramanantham


