Re: Harnad: The Symbol Grounding Problem

From: Watfa Nadine (
Date: Wed May 30 2001 - 11:54:54 BST

>Yet to me this poses the problem of whether we as human
>beings follow rules or merely act in accordance with them,
>surely some of our human characteristics are following rules
>explicitly as they are built into our genetic profile, ie. the hairs
>on our arms standing on end when we are cold. Yet can the
>human mind be seen as explicitly following a set of pre-set rules
>defined by genetic knowledge ? or are we merely acting in
>accordance with rules build up subconsciously throughout our
>lifetime ?

It is true that some of our human characteristics follow rules
built into our genetic profile. We cannot control these genetic
aspects, as they are part of how we are constituted. Take the
example of the hairs on our arms standing on end when we are
cold: this is governed by a physiological control mechanism,
which is part of our genetics and cannot be altered. But the way
in which we think and act, and the things we do, are related in
some way to our external environment.

>I like the approach Harnad has taken to this problem and I agree
>with him that a hybrid system is probably the best route for AI
>to take now.

This is not actually a satisfactory solution to the grounding
problem, as it only addresses part of the problem. It has been
argued by Chalmers (1992), Dorffner & Prem (1993), and
Sharkey & Jackson (1994) that the grounding problem is not
restricted to symbolic representations and hence cannot be solved
by a hybrid system alone. Physical grounding, such as hooking an
agent to its external environment by means of sensors and
actuators, does not in itself ground behaviour or internal
mechanisms, as the sensors and actuators are extrinsic to the agent.

Ziemke (1997) believes that machines have to be 'hooked' to the
external world, possibly through causal connections, which will
allow the internal mechanisms to interact with their environment
directly without external observation. But then the question that
needs to be asked is: will this necessarily make the machines
"intelligent" and allow them to "think" for themselves? What is
certain is that in any attempt to create an agent modelled on
human beings, it is essential that the agent be able to interact
with its environment.

>I believe that human beings are a combination of sensorimotor
>input and a symbol system.

If human beings were just a combination of sensorimotor input
and a symbol system, why has a device not yet been developed
that is totally indistinguishable from a human being on the basis
of this theory? Robotic agents have been developed with
capacities such as embedded sensors, yet these capacities are
still not intrinsic to the robot. Something else needs to be
implemented to ground the agent fully, so that it is able to
interact with its environment. Maybe the answer is the addition
of causal connections, but it seems that the correct solution is
yet to be identified; otherwise we would already have our answer
to AI.

>But do we fully understand the programs we create ? I do not
>think we do, and until we do fully understand the programs we
>create then I do not think we will fully achieve AI's aims.

We have to understand the programs we create because we are
the ones who designed and created them. In order to design and
create a program you need to understand the causal processes
involved. Shynn says that until we understand the programs we
cannot fully achieve AI's aims. But the reason we have not
fully achieved AI's aims is that a causal system has not yet
been created that can do what we can do, and so no system has
answered AI's How? question: "what is it that makes a system
able to do the kinds of things normal people can do?"

This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:31 BST