On Tue, 16 May 2000, Butterworth, Penny wrote:
> "Grounding Symbols through sensorimotor integration", Karl F. MacDorman
> http://www.cogsci.soton.ac.uk/~harnad/Temp/CM302/macdorman.pdf
>
> there is apparently some debate in the field
> of Psychology about Gibson's proposal, and whether the human brain actually
> recognises affordances before classifying an object (i.e. automatically
> realising something as a place to sit before recognising it as a chair or
> ledge). But MacDorman simply uses the term to refer to the opportunity of
> taking some action, and what level of 'consciousness' performs the
> recognition of these affordances is not really relevant to MacDorman's
> robot.
That's right. The debate in Psychology is about the scope of affordances
-- how much of what the environment "affords" to a user is simply
"picked up" by its sensory system and brain, and how much needs to be
worked on, processed, computed.
But MacDorman is basically right that it is real-world interactive
affordances to an autonomous, embodied agent that will be the basis of
the grounding of its internal symbolic states.
> The problem... is that (prewired) robots cannot adapt to changing
> affordances, and generally aren't competent enough. A simple graft of a symbol
> system on top does not help, because the symbol system is still only using
> internal syntactic constraints.
Correct.
> MacDorman mentions two ways in which biological systems (and particularly
> humans) do not suffer from the frame problem.
Kid-Sib: What IS the frame problem, auntie Penny?
> The first is that reasoning is
> empirically and functionally constrained, such that physically unreal
> possibilities are not even considered. And the second is that we are able to
> automate and parallelise routine actions, so that they don't take up
> conscious thought. The example MacDorman gives is that of walking, which
> takes all a child's concentration when first learned, but quickly with
> practice becomes an automatic procedure to such an extent that we can
> talk, play games, kick a ball etc. at the same time.
It seems to me the frame problem is that a symbol system alone cannot
second-guess everything about the real world. Maths has no frame
problem: Peano's axioms and theorems can second-guess everything about
numbers a priori. But symbols can't do the same for all of a robot's
potential real-world interactions.
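To make the contrast concrete, here is a toy formal sketch (in Lean; the
names PNat and add are mine, purely illustrative): once the Peano-style
axioms are written down, every question about the numbers is already
settled by symbol manipulation alone, with nothing left over that the
system would have to go out into the world to check.

inductive PNat where
  | zero : PNat
  | succ : PNat → PNat
deriving Repr

-- Addition is fixed entirely by two equations inside the symbol system.
def add : PNat → PNat → PNat
  | n, .zero   => n
  | n, .succ m => .succ (add n m)

-- 1 + 1 = 2, computed without consulting the world:
#eval add (.succ .zero) (.succ .zero)

-- General facts are provable a priori, by induction on the axioms:
theorem zero_add (n : PNat) : add PNat.zero n = n := by
  induction n with
  | zero => simp [add]
  | succ m ih => simp [add, ih]

A robot's dealings with chairs and ledges admit no such closed
axiomatisation; that is the frame problem.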
> So the system which MacDorman wishes to propose needs to combine the
> advantages of the subsumption architecture (which alone is like driving a
> car by instinct, but not being able to learn anything new), i.e. having
> habitual parallel behaviours, with those of the global conceptualisation
> (which alone is like driving without ever having practiced driving
> before).
Easy to say, harder to do...
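Harder, but here is roughly the shape of what is being asked for, as a toy
Python sketch (my illustration, not MacDorman's code; all the behaviour
names are made up): a stack of habitual, reflex-like layers that always
run, combined with a slow deliberative layer whose plan they can override.

# A toy sketch (mine, not MacDorman's): a subsumption-style stack of
# habitual reactive behaviours plus a slow deliberative layer.
# Every name here (avoid_obstacle, follow_route, ...) is illustrative.

def avoid_obstacle(percept):
    """Lowest layer: reflexive, always runs (like balance while walking)."""
    if percept.get("obstacle_ahead"):
        return "turn_away"
    return None  # no opinion; defer upwards

def follow_route(percept, plan):
    """Middle layer: habitual route-following, automatic once practised."""
    return plan[0] if plan else None

def global_conceptualiser(percept, goals):
    """Top layer: slow symbolic planning; returns a whole plan, not a reflex."""
    return ["forward"] * goals.get("steps", 0)

def act(percept, goals, plan):
    if not plan:                       # deliberation only when needed
        plan = global_conceptualiser(percept, goals)
    # Lower layers subsume the plan whenever the world demands a reflex.
    action = avoid_obstacle(percept) or follow_route(percept, plan) or "idle"
    if plan and action == plan[0]:
        plan = plan[1:]                # habitual step consumed
    return action, plan

The point of the layering is that the deliberative layer never has to
micromanage the reflexes, and the reflexes never have to wait for it.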
> > MACDORMAN
> > An intelligent robot can discover the various interactions and effects that
> > its environment affords by learning spatiotemporal correlations in its
> > sensory projections, motor signals, and internal variables. These
> > correlations are a kind of embodied prediction about the future...
> > Currently active predictions constitute the robot's affordance model.
Just as important as the correlations (which are merely quantitative) is
the (iconic) shape-matching between the world and internal states;
symbols alone are arbitrary in shape.
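As a rough sketch of the quoted mechanism (again my toy Python, not the
Psi-ro implementation; the class and method names are illustrative),
learning spatiotemporal correlations and treating the currently applicable
predictions as the affordance model might look like this:

from collections import defaultdict

class AffordanceModel:
    def __init__(self):
        # (sensory state, motor signal) -> {observed next state: count}
        self.transitions = defaultdict(lambda: defaultdict(int))

    def observe(self, state, action, next_state):
        """Learn one spatiotemporal correlation from an interaction."""
        self.transitions[(state, action)][next_state] += 1

    def predict(self, state, action):
        """Most frequently observed outcome, or None if never tried."""
        outcomes = self.transitions.get((state, action))
        return max(outcomes, key=outcomes.get) if outcomes else None

    def affordances(self, state, actions):
        """Currently active predictions: what this situation affords."""
        preds = {a: self.predict(state, a) for a in actions}
        return {a: p for a, p in preds.items() if p is not None}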
> Then if a prediction fails to produce the expected result, errors divert
> attention to the possible miscategorisation to aid the process of learning
> affordances.
So far this could just be an error-correcting servo or neural net
or symbolic learning algorithm. What grounds it is that it is part of an
autonomous, embodied interaction with the real world.
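The error-correcting step, in the same toy vein (assuming something like
the AffordanceModel sketched above, and with world_step standing in for
the robot actually acting), is the point where a failed prediction about a
real interaction flags a possible miscategorisation:

def interact(model, state, action, world_step):
    predicted = model.predict(state, action)
    actual = world_step(state, action)     # real sensorimotor consequence
    model.observe(state, action, actual)   # update the correlations either way
    surprise = None
    if predicted is not None and predicted != actual:
        # Divert "attention": mark this (state, action) pairing as a
        # possible miscategorisation needing more sampling.
        surprise = (state, action, predicted, actual)
    return actual, surprise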
> So overall
> MacDorman makes some interesting proposals about grounding robotic
> systems, using empirical constraints in learning, using global
> conceptualisation for planning, parallelising habitual behaviour, but he
> only goes part way to putting them into practice with Psi-ro.
I agree. But we will still be in the toy-world for some time to come...
Stevan