"Grounding Symbols through sensorimotor integration", Karl F. MacDorman
http://www.cogsci.soton.ac.uk/~harnad/Temp/CM302/macdorman.pdf
MacDorman begins the paper by discussing two types of robot systems (similar
to the cognitivist and enactivist approaches in the Ziemke paper) and
their strengths and limitations. The first is the pure symbol system,
which suffers from the Symbol Grounding problem as we've already
discussed. More specifically (a kind of extended application of the
grounding problem), he says that not only do the symbols need to be
grounded, but also the rules for manipulating them, and the whole
learning process.
> MacDorman:
> The basic problem with this arrangement is that symbol manipulation depends
> solely on properties that are part of the system's internal workings: how
> computer hardware implements the system's syntactic constraints. But the
> sensorimotor relation between a robot's body and the external environment
> must be able to influence the causal relation between its internal
> symbols and the external state of affairs they represent.
He mentions that some attempts have been made to set up symbol-object
connections in advance, but these are no good because they can never
really cope with an unpredictable real-world environment. This is another
example of what Ziemke might have referred to as too much interference by
the designer, but I agree with Harnad that what matters is the result
(i.e. simply whether it passes T3 without cheating). It may be that some
genius in the future will come up with a minimal set of building-block
symbols/objects/concepts which can be built upon to represent anything
that intelligent real-world thought needs (i.e. T3-passing), but we're
not there yet.
MacDorman also introduces the term 'affordance'.
> MacDorman:
> [the robot]'s opportunities for interaction and the kinds of sensorimotor
> invariance available to recognise them. J.J. Gibson (1979) called these
> opportunities affordances.
It may be interesting to note that there is apparently some debate in the
field of psychology about Gibson's proposal, and whether the human brain
actually recognises affordances before classifying an object (i.e.
automatically registering something as a place to sit before recognising
it as a chair or ledge). But MacDorman simply uses the term to refer to
the opportunity of taking some action, and what level of 'consciousness'
performs the recognition of these affordances is not really relevant to
MacDorman's robot.
The second approach to artificial intelligence he mentions is what Brooks
called the subsumption architecture (the equivalent of Ziemke's
Enactivism):
> MacDorman:
> ...in which each processing layer constitutes a behaviour (e.g. wander, avoid
> obstacles, track ball, shoot goal). Layers run in parallel with minimal
> interaction. They enjoy a tight coupling with sensing and action by directly
> using the robot's sensing of the environment as a point of reference instead
> of a centralized representation. This makes for fast, reactive behaviour.
The problem MacDorman mentions with this approach is that these robots
cannot adapt to changing affordances, and generally aren't competent
enough. Simply grafting a symbol system on top does not help, because the
symbol system is still only using internal syntactic constraints.
The discussion on the Ziemke paper mentioned the need for centralised reasoning,
and MacDorman also mentions the importance of being able to coordinate both
thinking and action.
> MacDorman:
> To deal effectively with new situations a robot needs to model its
> affordances so that it can test its actions against a model before testing
> them against the world... A centralised representation may in fact form the
> core of a robot's affordance model, serving as a global conceptualisation.
MacDorman reasonably mentions that this global conceptualisation does not have
to be geographically central or one big lump...
> MacDorman:
> Simple units together constitute a global conceptualisation to the extent that
> their separate interactions foster global coherence and integration among
> separate bits of information.
> ...
> Implementing it with a traditional symbol system results in the frame problem.
> This is not only because it creates a processing bottleneck, but because ...
> the fact that they only have syntactic constraints means that they can
> represent anything that is logically possible including a limitless number of
> absurd concepts.
MacDorman then mentions two ways in which biological systems (and
particularly humans) avoid the frame problem. The first is that reasoning
is empirically and functionally constrained, such that physically unreal
possibilities are not even considered (a toy sketch of this idea follows
below). The second is that we are able to automate and parallelise
routine actions, so that they don't take up conscious thought. The
example MacDorman gives is walking, which takes all a child's
concentration when first learned, but with practice quickly becomes so
automatic that we can talk, play games, kick a ball etc. at the same
time.
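As a toy illustration of the first point (entirely my own construction,
not anything from the paper), an agent that only draws its candidate
actions from transitions it has actually experienced never even
enumerates the physically unreal options:

    from collections import defaultdict

    experienced = defaultdict(set)  # state -> actions that have ever worked

    def record(state, action):
        experienced[state].add(action)

    def candidate_actions(state, all_actions):
        # Unconstrained symbolic reasoning would consider every logically
        # possible action here; the empirical filter keeps only lived ones.
        return [a for a in all_actions if a in experienced[state]]

    record("at_ledge", "step_back")
    record("at_ledge", "turn")
    print(candidate_actions("at_ledge", ["step_back", "turn", "walk_forward"]))
    # -> ['step_back', 'turn']; walking off the ledge is never proposed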
So the system which MacDorman wishes to propose needs to combine the
advantages of the subsumption architecture (which alone is like driving a
car by instinct, but without being able to learn anything new), i.e.
having habitual parallel behaviours, with those of the global
conceptualisation (which alone is like driving without ever having
practised driving before).
MacDorman's system also needs some way of modelling affordances...
> MacDorman:
> An intelligent robot can discover the various interactions and effects that
> its environment affords by learning spatiotemporal correlations in its
> sensory projections, motor signals, and internal variables. These
> correlations are a kind of embodied prediction about the future...
> Currently active predictions constitute the robot's affordance model.
Then if a prediction fails to produce the expected result, the error
diverts attention to the possible miscategorisation, which aids the
process of learning affordances.
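A minimal sketch of this error-driven scheme might look like the
following. This is a deliberate simplification of my own: the paper's
predictions are over continuous sensorimotor signals, not discrete labels
like these.

    predictions = {}  # (category, motor signal) -> expected next category

    def act_and_learn(category, motor_signal, observed_outcome):
        key = (category, motor_signal)
        expected = predictions.get(key)
        if expected == observed_outcome:
            return False                # prediction held; nothing to learn
        # Prediction error: divert attention to the possible
        # miscategorisation and update the model from the actual outcome.
        predictions[key] = observed_outcome
        return True

    # First contact with a 'food' robot is a surprise; the second is not.
    print(act_and_learn("small_robot", "approach", "energy_gained"))  # True
    print(act_and_learn("small_robot", "approach", "energy_gained"))  # False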
MacDorman then introduces his robot, which I will have to call Psi-ro for
lack of Greek symbols. The idea is that it trundles around an environment
with other robots which are either 'food' or dangerous to it. It uses
certain algorithms for identifying moving objects and then decomposing
the image into a signature in the form of wavelet coefficients, which may
be similar to processes in the brain. When it makes contact with an
object it turns the sensorimotor feedback into a categorical
representation (as discussed before Easter in Harnad's paper on the
symbol grounding problem) by picking out the invariant and identifying
signatures from those it has been accumulating. Once it has learned some
signatures in this way Psi-ro can begin to predict affordances, only
learning further if it miscategorises. In a similar way, Psi-ro learns
the effects of its movements on its environment by developing predictions
based on past experience of how motor signals affect sensory projections.
In this way, Psi-ro is learning empirical constraints. It can use these
predictions to analyse chains of actions, so it can plan ahead by
searching through the phase space for the most efficient path to its
goal. (Rough sketches of the signature and planning steps follow below.)
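For a rough idea of the signature step, here is a sketch using the
PyWavelets library. The library, wavelet choice and patch size are my
assumptions; MacDorman's actual decomposition will differ in detail.

    import numpy as np
    import pywt

    def signature(patch, wavelet="haar", level=2):
        # Multi-level 2-D wavelet decomposition of a segmented moving object.
        coeffs = pywt.wavedec2(patch, wavelet, level=level)
        # Flatten all coefficient arrays into one feature vector.
        flat = [coeffs[0].ravel()]
        for detail in coeffs[1:]:
            flat.extend(c.ravel() for c in detail)
        return np.concatenate(flat)

    patch = np.random.rand(32, 32)   # stand-in for a segmented object image
    sig = signature(patch)
    print(sig.shape)                 # compact vector to compare/accumulate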
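And the planning step could be approximated by a search over the learned
predictions, something like this sketch. Again this is my construction:
breadth-first search stands in for whatever search through the phase
space MacDorman actually uses.

    from collections import deque

    def plan(start, goal, actions, predict):
        """predict(state, action) -> expected next state, from the model."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, path = frontier.popleft()
            if state == goal:
                return path          # most efficient (fewest actions) path
            for a in actions:
                nxt = predict(state, a)
                if nxt is not None and nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [a]))
        return None                  # goal unreachable under the model

    # Toy 1-D world: positions 0..5, learned moves shift position by +/-1.
    predict = lambda s, a: s + a if 0 <= s + a <= 5 else None
    print(plan(0, 4, [+1, -1], predict))   # -> [1, 1, 1, 1]

The crucial feature is that the search runs entirely against the learned
model, so actions are tested internally before being tried in the world.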
We can see then that Psi-ro does implement MacDorman's proposal of using
a global conceptualisation, modelling future actions before implementing
them. It would be interesting to see how Psi-ro performs under these
constraints in its environment, because in a simplistic system it would
have made sense to tell the robot to avoid dangerous robots as an extra
constraint when searching for the best path (the equivalent of
MacDorman's example of modelling the consequences of walking off a cliff
before actually doing it). Although he doesn't mention it, MacDorman
presumably didn't do this so that Psi-ro would learn for itself that
certain robots were dangerous, changing its goals to move away if it got
too close to one. There are quite a lot of details MacDorman doesn't
mention, like how other robots are identified to Psi-ro as dangerous or
beneficial, or how it decides on its current goal. He also doesn't
mention any implementation of his interesting idea of parallelising
habitual actions, which is disappointing. So overall MacDorman makes some
interesting proposals about grounding robotic systems: using empirical
constraints in learning, using a global conceptualisation for planning,
and parallelising habitual behaviour. But he only goes part way to
putting them into practice with Psi-ro.
Butterworth, Penny