From: Watfa Nadine (firstname.lastname@example.org)
Date: Wed May 02 2001 - 21:00:17 BST
Ziemke's paper re-evaluates the problem of 'grounding': how the
functions and internal mechanisms of a machine can be made intrinsic to
the machine itself. A system has intrinsic meaning if its symbols are
systematically interpretable without the involvement of an external
interpreter. For example, what I take something to mean does not depend
in any way on anyone else's interpretation of it: it means what it
means to me, intrinsically, on its own.
Searle's and Harnad's analyses of the grounding problem are examined, as
well as the different approaches to solving it, based on the cognitivist
and the enactive paradigms in computer science. It is argued that, even
though the two differ, both fail to provide a fully grounded system.
Ziemke's position, therefore, is that grounding should be tackled by
bottom-up development of complete robotic agents in interaction with
their environment.
Ziemke starts his paper by reviewing the grounding problem according to
Searle and Harnad. Searle's argument was that of the Chinese Room.
>Imagine a person sitting in a room, who is passed (e.g. under the door)
>sequences of, to him / her meaningless, symbols in Chinese. The person
>processes these symbols according to formal rules which are given in his
>/ her native language (e.g. written on the room's walls), and returns a
>sequence of resulting symbols…In reality he / she does of course not
>understand Chinese.
The symbols and the symbol manipulation, being based purely on shape
rather than meaning, are systematically interpretable as having meaning.
But that interpretation is not intrinsic to the symbol system itself; it
exists only because the symbols have meaning for us. A further example
of this is a book: the meanings of the symbols in the book are not
intrinsic, but derive from the meanings in our heads.
Ziemke goes on to Harnad's argument, that of the Chinese/Chinese
dictionary:
>In his formulation of the 'Symbol Grounding Problem' Harnad compared
>purely symbolic models of mind to the attempt to learn Chinese as a
>first language from a Chinese/Chinese dictionary.
If you attempted to look up what a word meant using a Chinese/Chinese
dictionary, with no prior knowledge or understanding of Chinese, you
would never find your answer: you would just keep proceeding from one
definition to another, each being nothing more than another meaningless
string of symbols.
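The regress in the dictionary example can be sketched in a few lines of Python. The words and definitions below are made up for illustration; the point is only that every definition leads to more dictionary entries, never to anything outside the dictionary:

```python
# A toy "dictionary-only" language: every word is defined purely in
# terms of other words in the same dictionary (all words hypothetical).
dictionary = {
    "ma":   ["niu", "shou"],   # each definition is just more words
    "niu":  ["ma", "dong"],
    "shou": ["dong", "niu"],
    "dong": ["shou", "ma"],
}

def look_up(word, steps=10):
    """Follow definitions; note that we never leave the dictionary."""
    trail = [word]
    for _ in range(steps):
        word = dictionary[word][0]   # follow the first word of each definition
        trail.append(word)
    return trail

print(look_up("ma"))
# The trail just cycles among dictionary entries: for someone with no
# prior Chinese, no step ever "bottoms out" in a meaning.
```

However many steps you allow, the trail stays inside the symbol system, which is exactly Harnad's point.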
So how come the symbols in our mind mean something? It is because some
of these symbols are connected to the things they stand for by the
sensorimotor mechanisms that detect and recognise those things. Then a
dictionary is built into our minds from the grounded basic vocabulary,
by combining and re-combining the symbols into higher-order categories.
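Harnad's proposal can be sketched concretely. In the toy code below (all detector names and features are invented for illustration), a few basic symbols are grounded in simulated sensorimotor detectors, and a new symbol is defined purely by combining already-grounded ones, echoing Harnad's own "zebra = horse and stripes" example:

```python
# Sketch of symbol grounding: a few basic symbols are tied to detectors
# that operate on "sensory" data, and higher-order categories are built
# by symbolic combination of grounded symbols. Features are hypothetical.

def detects_horse(percept):          # grounded: tied to sensor data
    return percept["legs"] == 4 and percept["mane"]

def detects_striped(percept):        # grounded: tied to sensor data
    return percept["stripes"] > 0

grounded = {"horse": detects_horse, "striped": detects_striped}

# Higher-order category: a purely symbolic combination of grounded
# terms, in the spirit of Harnad's "zebra = horse AND striped".
def zebra(percept):
    return grounded["horse"](percept) and grounded["striped"](percept)

animal = {"legs": 4, "mane": True, "stripes": 30}
print(zebra(animal))   # True: "zebra" inherits grounding from its parts
```

The new symbol never needs its own detector; its meaning bottoms out in the grounded base, unlike the dictionary regress.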
>A number of approaches to grounding have been proposed, all of which
>basically agree in two points:
>1) Escaping the internalist trap has to be considered "crucial to the
>development of truly intelligent behaviour" (Law & Miikkulainen, 1994).
>2) In order to do so, machines have to be 'hooked' (Sharkey & Jackson,
>1996) to the external world in some way, i.e. there have to be causal
>connections, which allow the internal mechanisms to interact with their
>environment directly and without being mediated by an external observer.
I think the question to ask here is: if we 'hook' machines to the
external world, will that necessarily make them intelligent? How would
it enable them to think for themselves?
The question of what exactly has to be hooked to what, and how,
divides the two approaches to grounding, cognitivist grounding and
enactivist grounding, and is discussed later by Ziemke. First he
distinguishes between cognitivism and enaction.
>Cognitivism is based on the traditional notion of representationalism
>(Fodor, 1981; Fodor & Pylyshyn, 1988), characterized by the assumption
>of a stable relation between manipulable agent-internal representations
>('knowledge') and agent-external entities in a pre-given external world.
As I understand it, cognitivism treats cognition as consisting in
internal representations and the manipulation of those representations.
Unlike in enaction, knowledge is independent of interaction with the
external environment, taking the form of explicit, manipulable,
internal representations. A distinction is drawn between perceptual
input systems and central systems.
>The enaction paradigm on the other hand, emphasizes the relevance of
>action, embodiment and agent-environment mutuality.
Enaction, by contrast, takes a completely different view: cognition
depends on the external environment, i.e. it is linked to lived
histories of sensorimotor interaction with the environment.
Representationalism is correspondingly de-emphasised.
Ziemke continues by defining cognitivist grounding:
>Cognitivist grounding approaches typically focus on input systems
>grounding atomic representations in sensory / sensorimotor invariants.
>That means, here the required causal connection between agent and
>environment is made by hooking atomic internal representations to
>external entities or object categories.
This is the cognitivist answer to the problem posed by Harnad's
Chinese/Chinese dictionary: the atomic symbols are hooked, via
sensorimotor mechanisms, to the things they stand for, and the rest of
the vocabulary is then built up from this grounded base by combining
and re-combining symbols into higher-order categories.
>A typical example (of cognitivist grounding) is the work of Regier
>(1992), who trained structured connectionist networks to label sequences
>of two-dimensional scenes, each containing a landmark and an object,
>with appropriate spatial terms expressing the spatial relation of the
>two (e.g. 'on', 'into', etc.). Or, in Regier's words: "the model learns
>perceptually grounded semantics".
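A drastically simplified stand-in for what Regier's system does might look like the following. Regier's actual model was a structured connectionist network over two-dimensional scenes; this toy version is just a single logistic unit that learns to label the vertical relation of an object to a landmark ("above" vs "below") from a made-up coordinate feature, so every detail here is an assumption for illustration only:

```python
import math
import random

# Toy stand-in for Regier-style "perceptually grounded semantics":
# learn to attach a spatial term to a scene directly from perceptual
# input, here reduced to one feature (object height minus landmark height).
random.seed(0)

def make_scene():
    dy = random.uniform(-1, 1)            # invented perceptual feature
    return dy, 1.0 if dy > 0 else 0.0     # label: 1 = "above", 0 = "below"

w, b = 0.0, 0.0
for _ in range(2000):                     # plain stochastic gradient descent
    dy, label = make_scene()
    pred = 1 / (1 + math.exp(-(w * dy + b)))
    w -= 0.5 * (pred - label) * dy
    b -= 0.5 * (pred - label)

def label_scene(dy):
    return "above" if 1 / (1 + math.exp(-(w * dy + b))) > 0.5 else "below"

print(label_scene(0.7), label_scene(-0.4))   # above below
```

The learned mapping goes straight from (simulated) perception to a spatial term, which is the sense in which such a model's semantics are "perceptually grounded".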
One major problem Ziemke sees in this case is that Regier, in designing
his transducer, built much of his own knowledge into it; that knowledge
is therefore extrinsic to the overall system.
Ziemke continues by defining enactivist grounding:
>Robotic agents, situated in some environment and causally connected to
>it via sensory input and motor output…hooking an agent to its
>environment.
Grounding in this case is done by physically connecting the robotic
agents to their environment by means of sensors and actuators. One
example of this is a robot whose sensors allow it to avoid obstacles.
But avoiding the obstacles is not a decision made by the robot itself,
and so the behaviour can in no way be seen as intrinsic to the robot;
it is completely extrinsic.
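The obstacle-avoiding robot can be sketched to make this criticism concrete. The interface below (two proximity sensors, two wheel speeds) is entirely hypothetical, but it shows where the "decision" actually lives: in a fixed sensor-to-motor mapping chosen by the designer, not by the robot:

```python
# Minimal reactive obstacle avoidance for a two-wheeled robot
# (hypothetical interface). The "decision" to avoid is the designer's
# wiring: the sensor-to-motor mapping is fixed before the robot ever runs.

def avoid(left_dist, right_dist, cruise=1.0, gain=0.5):
    """Map two proximity readings (metres) to (left, right) wheel speeds.
    An obstacle close on the right slows the left wheel, so the robot
    veers left, away from it; and symmetrically for the left side."""
    left_speed = cruise - gain / max(right_dist, 0.1)
    right_speed = cruise - gain / max(left_dist, 0.1)
    return left_speed, right_speed

print(avoid(0.2, 5.0))   # obstacle close on the left: right wheel slows,
                         # so the robot turns right, away from it
print(avoid(5.0, 5.0))   # nothing nearby: both wheels at cruise speed
```

Everything the robot "does" here is fully determined by the designer's choice of `cruise` and `gain`, which is exactly why the avoidance behaviour is extrinsic rather than intrinsic.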
Ziemke concludes not by stating which of the two, cognitivism or
enaction, is the correct approach, but by identifying the points on
which the two approaches converge:
>Both approaches require fully grounded systems to be 'complete agents'.
>Both approaches require a certain degree of bottom-up development /
>learning.
>Both grounding approaches require their agents to have robotic
>bodies.
This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:30 BST