From: Shynn Chris (cs698@ecs.soton.ac.uk)
Date: Wed May 30 2001 - 15:44:04 BST
>> Shynn:
>> Yet to me this poses the problem of whether we as human beings follow
>> rules or merely act in accordance with them. Surely some of our human
>> characteristics follow rules explicitly, as they are built into our
>> genetic profile, e.g. the hairs on our arms standing on end when we
>> are cold. Yet can the human mind be seen as explicitly following a
>> set of pre-set rules defined by genetic knowledge? Or are we merely
>> acting in accordance with rules built up subconsciously throughout
>> our lifetime?
>
> Watfa:
> It is true that some of our human characteristics follow rules
> built into our genetic profile. We cannot control our genetic aspects,
> as that is how we were created. Take the example of the hairs on our
> arms standing on end when we are cold: this is controlled by a
> physiological control mechanism, which is part of our genetics and
> cannot be altered. But the way in which we think and act, and the
> things we do, are related in some way to our external environment.
Shynn:
I agree with Watfa here: we are not just a set of rules built into our
genetic profile, and humans build the rules they work by through
interaction with the external environment and with other people. If
humans were solely a body of genetic knowledge, then software engineers
would already be able to produce a thinking being, since all our
knowledge would form a set that could be programmed into a computer.
That view, however, ignores the fact that babies learn, and that they
learn their rules of living from birth through interaction. This can be
seen in the way identical twins grow up: although they have exactly the
same genetic code, they live in slightly different environments and so
come to differ from each other, which shows that the living environment
must play a part in human learning.
>> Shynn:
>> I like the approach Harnad has taken to this problem and I agree with
>> him that a hybrid system is probably the best route for AI to take
>> now.
>
> Watfa:
> This is not actually a satisfactory solution to the grounding
> problem as it only addresses part of the problem. It has been argued
> by Chalmers (1992), Dorffner & Prem (1993), and Sharkey & Jackson
> (1994) that the grounding problem is not just restricted to symbolic
> representations and hence cannot be solved by just a hybrid system.
> Physical grounding, such as hooking an agent to its external
> environment by means of sensors and actuators, does not however
> ground behaviour or internal mechanisms, as the sensors and actuators
> are extrinsic to the agent itself.
Shynn:
Although I agree with Watfa on this point as well, I still believe that
a hybrid system is the way to move AI forward: even if a hybrid system
still encounters the problems that Watfa describes above, it would move
AI forward significantly. Looking at this point from a
reverse-engineering point of view, human senses could equally be seen
as extrinsic to our brains, yet humans are still able to ground symbols
by learning them and grounding them in sensorimotor input. For example,
if a human has absolutely no idea what an umbrella is, then that symbol
is not grounded; but if they do know other descriptive symbols, such as
a natural language, then an umbrella can easily be shown to them and
grounded in that language, in the same way that an umbrella could be
shown to a system and grounded in the computer's own base vocabulary.
I think that a small set of descriptors is needed to start with, so
that all other symbols may be built up from there, in the same way that
the English language is built up from the 26 letters of the alphabet.
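
To make this concrete, here is a small Python sketch of the kind of
"grounding by composition" I have in mind. Everything in it, from the
tiny grounded base set to the umbrella definition, is an invented
illustration, not a real grounding mechanism:

# A few base symbols are assumed to be grounded directly in sensorimotor
# categories; every new symbol must be defined purely in terms of symbols
# that are already known, so each definition bottoms out in the base set.

GROUNDED_BASE = {"round", "surface", "handle", "rain", "hold"}

definitions = {}  # derived symbol -> set of already-grounded symbols

def is_grounded(symbol):
    # Grounded if in the base set, or defined entirely in grounded terms.
    if symbol in GROUNDED_BASE:
        return True
    parts = definitions.get(symbol)
    return parts is not None and all(is_grounded(p) for p in parts)

def define(symbol, parts):
    # Refuse definitions that do not bottom out in grounded symbols.
    for part in parts:
        if not is_grounded(part):
            raise ValueError("'%s' is not grounded; cannot define '%s'"
                             % (part, symbol))
    definitions[symbol] = parts

define("canopy", {"round", "surface"})
define("umbrella", {"canopy", "handle", "hold", "rain"})
print(is_grounded("umbrella"))   # True: reduces to the base set
print(is_grounded("zebra"))      # False: never defined or grounded

In the same way that 26 letters generate the whole of written English,
a small grounded vocabulary could in principle anchor an unbounded set
of derived symbols.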
>> Shynn: I believe that human beings are a combination of sensorimotor
>> input and a symbol system.
>
> Watfa:
> If human beings were just a combination of sensorimotor input and a
> symbol system, why has a device based on this theory not yet been
> developed that is totally indistinguishable from a human being? Robotic agents
> have been developed with capacities such as sensors embedded into them,
> yet this still is not something that is intrinsic to the robot.
> Something else needs to be implemented to enable the agent to be fully
> grounded so it is able to interact with its environment. Maybe the
> answer is the addition of causal connections, but it seems that the
> correct solution is yet to be identified otherwise we would have our
> answer to AI.
Shynn:
Watfa makes a good point here: if humans are just symbol systems plus
sensorimotor input, why hasn't an artificial human been developed? I
think the answer may lie in the fact that no one, to my knowledge, has
attempted to create an artificial human baby, and I think that is what
must be done. If an artificial baby were built with a combination of
sensorimotor input and a symbol-system interpreter holding a small set
of grounded symbols, and if that baby were given a programmed desire to
learn from and copy other people so that it could learn at the same
rate as a normal human baby, then I believe it would develop in the
same fashion as a human baby, learning to ground more symbols and to
think in its own right. I think the reason we have not seen a fully
functional artificial human is that software engineers are trying to
program everything the system needs in advance, which is too ambitious.
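
As a rough illustration of the difference between programming everything
in advance and letting the system acquire it, here is a toy Python
sketch of such an "artificial baby". The percepts, the teacher, and the
innate base vocabulary are all hypothetical stand-ins, not a serious
design:

import random

# The agent starts with a tiny innate grounded vocabulary and a built-in
# drive to imitate; everything else is acquired from experience rather
# than hand-coded by a programmer.

class BabyAgent:
    def __init__(self):
        self.grounded = {"warm", "bright", "sound"}  # innate base symbols

    def sense(self):
        # Stand-in for real sensorimotor input from the environment.
        return random.choice(["ball", "cup", "face", "warm", "sound"])

    def observe_teacher(self, percept):
        # Stand-in for imitation: a caregiver names what the baby perceives.
        return percept

    def learn_step(self):
        percept = self.sense()
        word = self.observe_teacher(percept)
        if word not in self.grounded:
            # Ground the new symbol by pairing it with current sensory
            # input, instead of a programmer hand-coding its meaning.
            self.grounded.add(word)

baby = BabyAgent()
for _ in range(100):   # development over time
    baby.learn_step()
print(sorted(baby.grounded))

The point of the sketch is only that the final vocabulary is a product
of the agent's history of interaction, not of what was programmed in at
the start.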
>> Shynn:
>> But do we fully understand the programs we create? I do not think we
>> do, and until we do fully understand the programs we create, I do not
>> think we will fully achieve AI's aims.
>
> Watfa:
> We have to understand the programs we created because we are the ones
> who designed and created them. In order to design and create a program
> you need to understand the causal processes involved. Shynn says that
> until we do understand the programs we cannot fully achieve AI's aims.
> But the reason we have not fully achieved AI's aims is because a
> causal system has not yet been created that can do what we can do and
> has not answered AI's How? question: "what is it that makes a system
> able to do the kinds of things normal people can do?"
Shynn:
Why should humans always understand what they have created? A
programmer may design a program to perform one function, but a totally
different function may be the end result; the programmer did not plan
it and does not understand it, yet he still designed and created it. I
stand by my earlier supposition that until we fully understand all the
nuances of computer programming, and exactly why things do what they
do, we will never be able to create true AI systems. Another point is
that there are so many different programming languages and programmers
that there is no worldwide standard for programming, so what one
programmer writes may be total gibberish to another programmer, and
understanding is again lost.
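
Even a tiny, well-understood program can behave in a way its author did
not plan. A minimal Python example (nothing invented here beyond the
author's intent): summing 0.1 ten times "should" give exactly 1.0, yet
binary floating point makes the program compute something else:

total = 0.0
for _ in range(10):
    total += 0.1          # the programmer intends this to reach 1.0
print(total)              # 0.9999999999999999
print(total == 1.0)       # False: behaviour the author did not design

The program still does exactly what it was built to do, but not what
its creator understood it to do, which is the gap I am pointing at.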