Re: Shanahan: Robotics and Common Sense

From: Hosier Adam (
Date: Sat May 12 2001 - 17:44:55 BST

Hosier: <>
Sloss: <>

The following comments are in reply to the report by Finn Sloss on the
Shanahan paper "Robotics and Common Sense".

>The benefit of having the robot interact with its world by using logic
>sentences generated from its inputs, is that it completely eliminates the
>frame problem.

Is the frame problem really eliminated? It would seem to me that the
frame problem has not actually been encountered in this system, and
that if it were encountered the system could not solve it. I believe
that in order to solve the frame problem a system must have the ability
to reason and come to 'common sense' conclusions.
The Shanahan robot simply does not do this. If you agree that merely
crunching through logical formulas, wherever they come from, is not
cognition, then there is no question that the Shanahan robot is not
thinking, reasoning, or doing any of the other things that would be
needed to attempt a solution to the frame problem. One test of the
frame problem in this situation would be to move the Shanahan robot
within its landscape and see whether it could deduce that 'it had been
moved within the world' or whether it falsely deduced that 'the world
had moved around it'.

>While the robot can navigate its world, it still can't decide why it wants
>to travel to the particular location.

The author's own words above give another example of how the robot
system does not 'completely eliminate the frame problem'.

>By utilizing the formal logic described in this paper, any new objects that
>are encountered are instantly incorporated into the robot's description of
>the world, therefore it can cope with new objects or being immersed into a
>strange new environment.

However, this is not cognition, only a limited ability to handle a
single kind of variation (physical shape) within the real world. In a
way it could be said that the system handles the frame problem only
because, in this limited situation, it never actually meets the frame
problem.
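To make the limitation concrete, the 'instant incorporation' of a new object amounts to little more than adding a sentence to a database of facts. The following is my own toy sketch (the class and method names are illustrative, and this is not Shanahan's actual event-calculus machinery), showing how a sensed object becomes a new assertion without any reasoning about what else might have changed:

```python
# Toy sketch of a logic-style world model: sensed objects become new
# facts, and 'coping with new objects' is just set insertion. Nothing
# here deliberates about the consequences of a change, which is why
# such a system never meets the frame problem rather than solving it.

class World:
    def __init__(self):
        # Facts are ground sentences, e.g. ("occupied", x, y).
        self.facts = set()

    def sense_bump(self, x, y):
        # A bump at (x, y) is incorporated instantly as a new fact;
        # no other part of the model is revisited or revised.
        self.facts.add(("occupied", x, y))

    def passable(self, x, y):
        # Closed-world assumption: anything not known to be occupied
        # is assumed free.
        return ("occupied", x, y) not in self.facts

w = World()
assert w.passable(2, 3)      # empty map: the robot thinks it can go anywhere
w.sense_bump(2, 3)
assert not w.passable(2, 3)  # the new object is now part of the model
```

Note that the closed-world assumption does all the work here: the model only ever grows by one fact per bump, so the question of which other beliefs need updating simply never arises.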

>Initially the robot will think it can move anywhere because the initial map
>is empty, but as it bumps into objects it will correct its choices as to
>where to move next, eventually being able to avoid collisions

A neural network given a simple data representation of the world (for
instance, a two-dimensional grid reference for each 'bump') could
perform the same task. The fact that the robot can move within a real
world and sense real bumps does seem to give the system a kind of
'real' ability which a simulated system does not have. But at the same
time, neither system can be argued to be performing cognition.
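The point that a purely simulated system could do the same job can be sketched directly. Below is my own minimal illustration (not from the paper, and simpler than a neural network, which would only be a fancier way of storing the same information): an agent wanders a grid, records each simulated 'bump', and thereafter refuses to re-attempt known obstacle cells.

```python
import random

# A simulated stand-in for the bump-sensing robot: the agent starts
# with an empty map, collides with hidden obstacles, records each
# 'bump', and eventually avoids those cells, just as the real robot
# 'corrects its choices as to where to move next'.

random.seed(0)
GRID = 5
obstacles = {(1, 1), (2, 3), (0, 4)}   # hidden from the agent
learned = set()                        # cells the agent has bumped into

def try_move(pos, target):
    x, y = target
    if not (0 <= x < GRID and 0 <= y < GRID):
        return pos                     # wall: stay put
    if target in learned:
        return pos                     # known obstacle: never re-attempted
    if target in obstacles:
        learned.add(target)            # simulated 'bump': record and stay
        return pos
    return target

pos = (0, 0)
for _ in range(200):                   # random walk; collisions dwindle
    x, y = pos
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    pos = try_move(pos, (x + dx, y + dy))
```

Whether the 'bumps' come from a real touch sensor or a set lookup makes no difference to the data structure being learned, which is exactly why neither version earns the label of cognition.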

>A kitchen robot could be told to make a coffee, in the case of a standard
>robot, if the coffee cup is not in its predefined location, the robot could
>fail at the task. Using the logic framework described in Shanahan's paper,
>the robot would be able to acquire the new location of the cup and
>successfully interact.

This assumes that the robot has the ability to distinguish the cup
from the rest of its surroundings, which would be the real AI triumph
in this system. As well as this, a robot that had to bump its way
around its environment each time it looked for an item might be amusing
but not very quick.

>The addition of a range sensor would allow the robot to avoid actually
>crashing into the object before it does any damage, and a camera could
>allow the robot to build up the shapes of objects more quickly.

The addition of the camera would make an interesting experiment. The
ability of the robot to map its 'bump-sensed' world onto the 'camera-
viewed' world, or even to create any kind of relationship between the
two, could yield some very informative results.

This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:30 BST