From: Crisp Jodi (email@example.com)
Date: Thu Mar 01 2001 - 20:25:52 GMT
Rethinking Grounding - Tom Ziemke
> The purpose of this paper is to re-examine the problem of grounding.
Grounding is the central concept of the whole paper, and some regard it as
the most important question about the possibility of AI. Grounding seems to
be defined as 'hooking' symbol manipulation to the external world.
> It is rather obvious that your thoughts and actions are in fact
> intrinsic to yourself, whereas the operation and internal
> representations of a pocket calculator are extrinsic and ungrounded.
Although thoughts and actions may be intrinsic to yourself, this is not
really "rather obvious". There are many arguments for why thoughts are
intrinsic (such as Descartes' Cogito), but actions often seem less so.
There are also arguments for why thoughts are not intrinsic, and since the
paper seems to rely heavily on the assumption that they are, it may be
important to decide whether they actually are or not.
Also, the matter of free will needs to be discussed. It can be argued that
nothing is actually intrinsic to ourselves: initially all behaviour is
hereditary and instinctive, and once we are introduced to our environment,
our actions are beyond our control; their outcomes affect the way we apply
our rules, evolving our behaviour deterministically. It is difficult to see
how self-determination can ever enter this equation, even in a human. On
this view, a robot's actions could be thought of as potentially just as
intrinsic.
[Ziemke: Searle's Chinese Room argument]
> Chinese-speaking observers outside the room could very well conclude
> that the person in the room in fact does understand Chinese.
The Chinese Room example illustrates symbol manipulation by showing that a
person who speaks no Chinese can still follow rules to produce answers
written in Chinese. It seems a bit strange that Chinese-speaking observers
could conclude the person did understand, because they would be unlikely to
conclude that a calculator understands the sums it does, and there does not
seem to be much difference: the person in the room is simply following a
set of rules, just like a calculator. The Chinese Room example is still
highly important though, since it contradicts the penpal version of the
Turing Test quite well.
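The "just like a calculator" point can be made concrete with a toy
rule-follower. This is my own illustration, not anything from Searle or
Ziemke, and the rulebook entries are invented: the program matches input
symbols purely by shape and emits the listed reply, with no access to what
any symbol means.

```python
# Toy sketch of rule-following in the spirit of the Chinese Room.
# The "rulebook" pairs question shapes with answer shapes; the
# program never interprets either. Entries are invented examples.

rulebook = {
    "你好吗": "我很好",   # "How are you?" -> "I am fine"
    "再见":   "再见",     # "Goodbye" -> "Goodbye"
}

def room(question):
    # Pure symbol manipulation: look up the shape, emit the reply.
    return rulebook.get(question, "请再说一遍")  # default: "please say again"

print(room("你好吗"))  # prints 我很好
```

From the outside, the replies are appropriate; on the inside there is only
lookup, which is the sense in which the room is like a calculator.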
> According to Searle, this [AI programs not understanding what they are
> doing] is mostly due to their lack of intentionality,
> ie. their inability to relate their arbitrary internal symbols to
> external objects or states of affairs.
The lack of intentionality may be a main difference between 'intelligent'
beings and those that are not intelligent (or at least not as
intelligent). An example is the Sphex wasp, which has rigid, instinctive
behaviour:
  'The female Sphex digs a burrow, stuns a caterpillar, drags it to
  the edge of her burrow, goes in for a final check, drags the
  caterpillar in, then lays her eggs next to it. If you move the
  caterpillar away, when she comes back for an inspection, she will
  drag it close again, and repeat the whole process. It will never
  occur to her to drag it straight in and skip the reinspection.'
  - www.emazing.com
This can be contrasted with a baby that wants something: to start with,
babies follow fixed rules, such as crying when they want food, but in time
they adapt these rules to get what they want even when situations change.
The Sphex wasp, by contrast, does not seem able to ground its world
properly, whatever it wants.
> The main reason strong AI has had little to tell us about thinking,
> is because it has nothing to tell us about machines.
Searle seems to assert that brains are machines, and that only brains, or
machines with the same causal powers as brains, can think. Brains could
well be machines that follow layers of rules programmed by our genes but
also shaped extensively by our external experiences. Thus, machines with
the same causal powers may likewise need to be programmed with rules by
their built-in traits, but also need access to external experiences.
> It will be parasitic on the fact that the symbols have meaning for us
> [ie. the observers], in exactly the same way that the meaning of the
> symbols in a book are not intrinsic, but derive from the meaning in our
> heads.
We interpret the symbols in a book in such a way that they become
meaningful to us: the symbols in the book are matched to symbols in our
heads that we have learnt to read, and from there these internal symbols
set off further rules about that representation. These rules have been
developed from the ability to ground the symbols, e.g. by remembering a
scenario that happened to us, the memory of which was triggered by
interpreting the symbols in the book.
Everyone derives meaning from the symbols they see, but the problem is that
everyone derives a different meaning from the same symbols, since no one is
likely to ground things in exactly the same way. For example, Bertrand
Russell observes that everyone may see colours differently: what looks blue
to one person, and is called "blue" by them, may actually look exactly like
someone else's interpretation of green, but there is no way of telling,
since people simply use the same word when they see a given colour. It does
not matter what the colour actually looks like to them, since they are
still able to communicate about it. Similarly, the Latin word "flavus"
means yellow, yet it is related to the German word "blau", meaning blue;
a certain amount of subjectivity has evidently crept in along the way.
This could be applied to Searle's Chinese Room: the person would be unable
to ground the symbols to the exact objects the Chinese writers intended,
but they might still ground them to something arbitrary and still correctly
apply the rules, and may therefore still be intelligent, since no one
actually grounds symbols to the same things anyway. It could then be argued
that it does not matter what symbols are grounded to, as long as they are
grounded.
There is also the problem that we do not know whether other people are
actually grounding symbols at all, due to the problem of the existence of
other minds: it has been argued, following Descartes and Gilbert Ryle, that
the existence of minds other than our own is dubious. We could then wonder
whether we ourselves actually ground symbols, but it would be hard to
imagine someone who could think without ever having sensed anything, or at
least having had some sort of hereditary sense impressions. Therefore, it
should be assumed that grounding probably is essential to thinking, or at
least that our own grounding is, since it is not really that important to
us whether other people ground or not. If we had not experienced the
external world, yet were given symbol manipulation rules, some people might
believe we were still thinking if, say, we had dreams in which these rules
were manipulated.
Ziemke, quoting Sharkey and Jackson:
> Machines have to be 'hooked' to the external world in some way,
> ie. there have to be causal connections, which allow the internal
> mechanisms to interact with their environment directly and without being
> mediated by an external observer.
It could be argued that it does not actually matter whether an external
observer mediates or not. With Descartes' example of the evil demon, for
instance, even if we are just brains in vats we are still sent sensory
information which we can then believe we are grounding, so the presence or
absence of an external observer makes no difference to whether we are
intelligent.
> Approaches to grounding
There are two approaches to grounding: the cognitivist view and the
enactivist view. Cognitivism suggests that thoughts are separate from
interaction with the external world, whereas the enactivist viewpoint is
that cognition depends upon external experiences, which cannot be separated
from it.
> Fodor's distinction into input systems (eg. low-level visual and
> auditory perception) and central systems (eg. thought and problem
> solving).
Cognitivism grounds by hooking internal representations with external
entities, and then internal complexity is built up from there.
> Harnad proposed a hybrid symbolic/connectionist system in which symbolic
> representations are grounded in non-symbolic representations of two
> types: Iconic representations, which are basically analog transforms of
> sensory percepts, and categorical representations, which exploit
> sensorimotor invariants to transduce sensory percepts to elementary
> symbols.
Since humans do not only ground in ways that require the actual existence
of objects, categorical representation allows for imaginary objects that
people can create by combining initial sensory experiences. For example, a
unicorn could be created from a horse and a horn. Without this ability,
anything we merely imagined would fall outside our understanding.
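The "horse + horn" idea can be sketched as symbol composition over grounded
feature sets. This is a toy illustration of the principle, not Harnad's
actual system: the feature names and the `compose` helper are invented for
the example.

```python
# Toy sketch: new symbols composed from already-grounded ones, so
# the new symbol inherits grounding indirectly. All names and
# feature sets here are illustrative assumptions.

# Elementary symbols grounded in stand-in sensory feature sets.
grounded = {
    "horse": {"four_legs", "mane", "tail"},
    "horn":  {"pointed", "bony"},
}

def compose(name, parts, lexicon):
    """Define a new symbol as the union of the features of its
    already-grounded parts."""
    lexicon[name] = set().union(*(lexicon[p] for p in parts))
    return lexicon[name]

unicorn = compose("unicorn", ["horse", "horn"], grounded)
# "unicorn" now carries features from both parts, although no
# unicorn was ever perceived directly.
```

The point is that only the elementary symbols need direct sensory
grounding; composites like "unicorn" are grounded through them.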
> Regier trained structured connectionist networks to label sequences of
> two-dimensional scenes, each containing a landmark and an object, with
> appropriate spatial terms expressing the spatial relation of the two.
The paper goes on to say that the machine can only ground the things it has
been told how to ground, which returns us to intentionality: the machine
does not use the information to actually do anything, it is just 'parasitic
on the interpretation in our heads.' Machines do not seem to desire to work
anything out; it is just us who want the machine to do something, which we
then interpret for ourselves. The act of labelling has no functional value
to the system itself, so it is not intrinsic, since it lacks purpose for
the system.
> Regier's transducer could be compared to an artificial heart: its use
> could still be intrinsic to an overall system, ie. a human (to the
> extent that it offers the same functionality as a natural heart),
> itself however could probably never be.
It could be argued that even a human brain is made up of components:
cutting out bits of the brain does not mean that it is no longer
intelligent, since it can still function, but one bit of the brain on its
own could be doubted as being intelligent. It is difficult to know where to
draw the line.
> A cognitivist grounding theory can, however, not be considered complete
> as long as it only explains the grounding of individual atomic
> representations but neither the transducing input system itself, nor its
> interdependence with its environment and computational central systems.
It may explain how an individual component can be grounded, but not the
system as a whole.
> Cognition depends upon the kinds of experiences that come from having a
> body with various sensorimotor capacities, and second that these
> individual sensorimotor capacities are themselves embedded in a more
> encompassing biological, physiological, and cultural context.
Enactivism seems to disagree with cognitivism in that you cannot really
ground an individual component: you need the information of the system as a
whole, since its parts probably developed together.
> Physical grounding, however, only offers a pathway for hooking an agent
> to its environment, it does, by itself, not ground behaviour or internal
> mechanisms.
Although robotic agents can be physically grounded, behavioural and
internal mechanisms still seem to be separate.
> Enactive systems typically consist of a number of behavioural subsystems
> or components working in parallel from whose interaction the overall
> behaviour of a system emerges.
Intelligence in this case is seen as an emergent property of the
interaction of numerous systems, and is not specified by an individual
component being intelligent.
> The question here is where to start grounding and where to end it?
Since for every layer of experience we ground, it is possible to ground the
next layer up, complete grounding under such recursion could be impossible.
It could be argued that it is the same for humans, and that humans may
suffer from the grounding problem too: each symbol or experience is
grounded in another, and this can be traced back to genetic information,
and thus further back to the start of creation, with no obvious place to
stop.
> The control of a simple robot that wanders around avoiding obstacles
> could emerge from one module making the robot go forward and a second
> module which, any time the robot encounters an obstacle, overrides the
> first module and makes the robot turn instead.
It could be said that even babies start off by overriding modules in their
initial programming. If they are hungry, they will cry, but if there is an
obstacle to stop them from getting food, they will adapt and attempt to
get it another way. The Sphex wasp does not appear to be able to override
modules given any amount of time.
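The two-module control scheme quoted above can be sketched in a few lines.
This is a minimal illustration of the override idea, with invented function
names rather than any real robot API: the avoidance module suppresses the
default module whenever it has something to say.

```python
# Minimal sketch (assumed, not from the paper) of the quoted
# two-module controller: "go forward" by default, overridden by
# "turn" whenever an obstacle is detected.

def forward_module(percept):
    return "forward"                       # default behaviour

def avoid_module(percept):
    # Active only when an obstacle is perceived; otherwise defers.
    return "turn" if percept["obstacle"] else None

def control(percept):
    # Higher-priority module suppresses the lower one.
    return avoid_module(percept) or forward_module(percept)

print(control({"obstacle": False}))  # forward
print(control({"obstacle": True}))   # turn
```

The "intelligence" of the wandering behaviour lives in neither module
alone; it emerges from the suppression relation between them.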
> The result of the transduction (ie. the system's actions) could be
> considered grounded, the transducer itself however (ie. the agent
> function as composed of the behavioural modules and their
> interconnection) is in no way intrinsic to the system.
This may not be such a great problem, since even for babies the equivalent
machinery is genetically imposed and not really intrinsic.
> The problem of design, however, remains to some degree, since by choice
> of architecture (including number of hidden units, layers, etc.) the
> designer will necessarily impose extrinsic constraints on the system, in
> particular when designing modular or structured connectionist networks.
This could be compared to humans, who seem to have extrinsic constraints
imposed on them, through their genetic make-up, etc.
> Sensorimotor mapping is actively constructed in every time step by a
> second connectionist net (the 'context net').
Evolution could be seen as a second connectionist net: if an organism is
not effectively carrying out its purpose it dies, while more effective
organisms get to breed, just as a connectionist net checks which rules are
working and which are not, eliminating ineffective rules and reweighting in
favour of effective ones.
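The analogy between selection and reweighting can be made concrete with a
toy loop. This is my own illustration, not a model from the paper; the
rules and fitness values are arbitrary invented numbers.

```python
# Toy sketch of selection as reweighting: rules that perform well
# keep their weight, rules that perform badly are weakened each
# "generation" and eventually eliminated. All values are invented.

rules = {"rule_a": 1.0, "rule_b": 1.0, "rule_c": 1.0}
fitness = {"rule_a": 0.9, "rule_b": 0.5, "rule_c": 0.1}

for generation in range(10):
    # Reweight each rule relative to the best performer...
    for r in rules:
        rules[r] *= fitness[r] / max(fitness.values())
    # ...and eliminate rules whose weight has collapsed.
    rules = {r: w for r, w in rules.items() if w > 0.01}

# After a few "generations" only the most effective rule survives.
print(rules)  # {'rule_a': 1.0}
```

The loop plays the role of breeding and dying; the reweighting step is the
analogue of a connectionist net shifting weight toward rules that work.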
> Physiological grounding is provided through the co-evolution and mutual
> determination of agents/species and their environments.
This is an important point to note, since true intentionality may be hard
to find, especially for designers and buyers. Evolution has taken a great
deal of time, with many minor considerations and adaptations along the way,
so trying to immediately give intentionality to a sensorimotor-equipped
robot may prove a fool's errand.
> Both approaches require fully grounded systems to be 'complete agents'.
> Both approaches require a certain degree of bottom-up
> Both grounding approaches require their agents to have robotic
This conclusion seems adequate to both cognitivists and enactivists,
although how to successfully include these requirements in an
implementation is, of course, questionable. There is, however, already a
prototype that exists in this form, namely humans.
This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:19 BST