Re: Edmonds: Constructibility of AI

From: Sloss Finn (
Date: Mon May 07 2001 - 17:35:01 BST

> Abstract

> The Turing Test, as originally specified, centres on the ability to
> perform a social role. The TT can be seen as a test of an ability to
> enter into normal human social dynamics. In this light it seems
> unlikely that such an entity can be wholly designed in an `off-line'
> mode, but rather a considerable period of training in situ would be
> required. The argument that since we can pass the TT and our cognitive
> processes might be implemented as a TM that, in theory, a TM that
> could pass the TT could be built is attacked on the grounds that not
> all TMs are constructible in a planned way. This observation points
> towards the importance of developmental processes that include random
> elements (e.g. evolution), but in these cases it becomes problematic
> to call the result artificial.

> The elegance of the Turing Test comes from the fact that it is not a
> requirement upon the mechanisms needed to implement intelligence but
> on the ability to fulfil a role.

Unfortunately, intelligence can be defined in many different ways. If
intelligence is defined as the ability to answer questions correctly, a
pocket calculator could be considered exceedingly intelligent, because
it would never get a calculation wrong (assuming that the device had no
technical flaws). Imposing the 'fulfilling a role' criterion limits the
overall abstract view of intelligent entities a great deal, and provides
us with a better 'yard-stick' with which to measure amounts of
intelligence.

> Turing specified the niche that intelligence must be able to occupy
> rather than the anatomy of the organism. The role that Turing chose
> was a social role - whether humans could relate to it in a way that
> was sufficiently similar to a human intelligence that they could
> mistake the two. What is unclear from Turing's 1950 paper, is the
> length of time that was to be given to the test. It is clearly easier
> to fool people if you only have to interact with them in a single
> period of interaction.
> It is something in the longer-term development of the interaction
> between people that indicates their mental capabilities in a more
> reliable way than a single period of interaction.

Edmonds uses the 'pen-pal' version of the Turing Test throughout this
paper as a method of testing entities for intelligence. The goal here is
not to measure the level of intelligence, but merely to ascertain
whether an entity has intelligence or not. A common error with the TT is
to give a percentage pass mark to entities taking the test; the test was
designed to be a pass/fail test, where entities that fail do not have
the required systems or knowledge to behave intelligently. An entity
also fails if it passes for a while but then fails when it runs out of
knowledge or hits the frame problem; in this case the entity was faking
intelligence, not actually having it.

> The ability of entities to participate in a cognitive `arms-race',
> where two or more entities try to `out-think' each other seems to be
> an important part of intelligence.

Conversations can flow from one subject to another. For example, two
humans could be talking about forms of transport, perhaps how bad the
rail system currently is, and the conversation could progress to a new
car that one of them has just bought. Entities that do not fit into the
human social sphere of interaction would probably treat the two subjects
as separate topics, and could therefore fail to show intelligence by not
relating the two areas of conversation.

If at some future point the interaction between two entities covers
'known' ground, the possibility of testing previous responses against
new knowledge drawn from the real world provides increasingly harder
tests for the entity to prove that it is indeed 'alive' and gathering
new knowledge by living in the world.

> I will adopt a reading of the Turing Test, such that a candidate must
> pass over a reasonable period of time, punctuated by interaction with
> the rest of the world. To make this interpretation clear I will call
> this the "long-term Turing Test" (LTTT). The reason for doing this is
> merely to emphasise the interactive and developmental social aspects
> that are present in the test.
> That the LTTT is a very difficult task to pass is obvious, but the
> source of its difficulty is not so obvious. In addition to the
> difficulty of implementing problem-solving, inductive, deductive and
> linguistic abilities, one also has to impart to a candidate a lot of
> background and contextual information about being human including: a
> credible past history, social conventions, a believable culture and
> even commonality in the architecture of the self.

If all the background and contextual information about being human is
'pre-loaded' into the intelligent system, there is a chance of running
into the frame problem. This is another point in favour of intelligent
entities having the ability to learn.

> I wish to argue that it is far from certain that an artificial
> intelligence (at least as validated by the LTTT) could be deliberately
> constructed by us as a result of an intended plan. There are two main
> arguments against this position that I wish to deal with. Firstly,
> there is the contention that a strong interpretation of the Church-
> Turing Hypothesis (CTH) to physical processes would imply that it is
> theoretically possible that we could be implemented as a Turing
> Machine (TM), and hence could be imitated sufficiently to pass the TT.
> Secondly, that we could implement a TM with basic learning processes
> and let it learn all the rest of the required knowledge and abilities.
> I will argue that such an entity would no longer be artificial.

In my opinion an entity would need to learn most of its knowledge and
abilities from interaction with the real world. It would be very
difficult indeed to describe all the possible ways in which an entity
could respond to a given situation. Almost everyone has had an argument
at some time in their life, and in an argument things can be done to
defuse or encourage it. The argument might not be about fact, but arise
simply because someone was 'having a bad day'; programming an entity to
respond to this situation would be immensely complicated. For a start,
the entity would have to choose whether to argue the point further, and
perhaps become more angry, or to try to comfort the other person and
ease the situation. Take the case of a caring social robot, one that
wants at all times to be kind and to avoid arguments: if the robot does
not learn, it could have several interactions with a person who finds
its attitude patronising, and perhaps eventually come to harm because of
its inability to change.

> What this shows is that any deterministic method of program
> construction will have some limitations. What it does not rule out is
> that some method in combination with input from a random `oracle'
> might succeed where the deterministic method failed.
> One can easily construct a program which randomly chooses a TM out of
> all the possibilities with a probability inversely proportional to the
> power of its length and this program could pick any TM. What one has
> lost in this transition is the assurance that the resulting TM is
> according to one's desire. When one introduces random elements in the
> construction process one has to check that the results conform to
> one's specification.
> However, the TT (even the LTTT) is well suited to this purpose,
> because it is a post-hoc test. It specifies nothing about the
> construction process. One can therefore imagine fixing some of the
> structure of an entity by design but developing the rest in situ as
> the result of learning or evolutionary processes with feedback in
> terms of the level of success at the test.
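The quoted construction - sampling a machine at random with probability that falls off exponentially with its length, then checking the result post hoc against a behavioural test - can be sketched in a few lines. Everything concrete here is my own illustrative assumption: the "programs" are just bit strings and the "specification" is a toy target pattern, standing in for the TT's role as a test that says nothing about how the candidate was built:

```python
import random

random.seed(0)

def sample_length(max_len=16):
    """Choose a program length L with probability proportional to 2**-L."""
    lengths = range(1, max_len + 1)
    weights = [2.0 ** -l for l in lengths]
    return random.choices(list(lengths), weights=weights)[0]

def random_program():
    """A stand-in for 'a random TM': here, just a random bit string."""
    return [random.randint(0, 1) for _ in range(sample_length())]

def passes_test(program, spec):
    """Post-hoc check, in the spirit of the TT: we inspect only the
    candidate's behaviour (here, trivially, its contents), never the
    construction process that produced it."""
    return program == spec

# Generate-and-test loop: random construction plus post-hoc filtering.
spec = [1, 0, 1]  # the (toy) specification the result must conform to
attempts = 0
while True:
    attempts += 1
    candidate = random_program()
    if passes_test(candidate, spec):
        break
print(f"found a conforming program after {attempts} attempts")
```

The point of the sketch is the division of labour Edmonds describes: the random `oracle` can reach any program at all, but only the test applied afterwards guarantees the result is "according to one's desire".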

The method of fixing the structure of an entity and then allowing the
system to develop in the real world seems much closer to how humans
themselves become intelligent. When a baby is born, it can be said that
it is not very intelligent. It has only very basic functionality, like
the knowledge of how to eat, how to get attention (crying), and how to
recognise its parents. It also has the innate ability to learn, and like
an unprogrammed neural net it has a lot of space in which to store
learnt information. As the baby grows older, its thinking structure (the
brain) develops and changes dramatically to incorporate things like
speech, how to walk, how to recognise human emotions, and so on.
Comparing this to an intelligent entity that we could fabricate, the
basic structure is equivalent to DNA: it makes the basic system
operational. Learning would then change the system to respond in
completely new ways, but the basic structure (as in humans) is retained.
Someone's upbringing probably accounts more for whether they will choose
to become highly educated - and therefore be considered highly
intelligent - or do manual labour that does not require so much
intelligence. Obviously, when the original system or child is created,
there are limits to the maximum potential the entity can achieve. Taking
any adult, how much of their intelligence can be solely attributed to
their initial structure? I would argue that, beyond the basic natural
instincts, all of their intelligence has come from learning.
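This "fixed structure plus learning" picture can be illustrated with a toy learner. The architecture below (a single weighted unit) is fixed in advance, like the designed part of the entity, while everything it ends up "knowing" - the weights - is acquired from examples. The choice of task (learning logical AND) and the perceptron rule are my own illustrative assumptions, not anything from Edmonds' paper:

```python
# Fixed structure: one unit, two inputs, a bias. Only the weights change.
def predict(weights, bias, x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

def train(examples, epochs=20, lr=0.1):
    """Perceptron rule: the structure above stays fixed, and all the
    behaviour is acquired from the training examples 'in situ'."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            error = target - predict(weights, bias, x)
            weights[0] += lr * error * x[0]
            weights[1] += lr * error * x[1]
            bias += lr * error
    return weights, bias

# 'Experience': the four cases of logical AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(examples)
print([predict(weights, bias, x) for x, _ in examples])  # prints [0, 0, 0, 1]
```

Before training the unit gets every case wrong except by accident; afterwards its behaviour matches the target, yet nothing about the fixed structure tells you what it has learnt - which is the sense in which the interesting part of the result is not in the design.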

> I raised the possibility that an entity that embodied a mixture of
> designed elements and learning in situ, might be employed to produce
> an entity which could pass the LTTT. One can imagine the device
> undergoing a training in the ways of humans using the immersion
> method, i.e. left to learn and interact in the culture it has to
> master.
> However, such a strategy, brings into question the artificiality of
> the entity that results. Although we can say we constructed the entity
> before it was put into training, this may be far less true of the
> entity after training. To make this clearer, imagine if we constructed
> `molecule-by-molecule' a human embryo and implanted it into a woman's
> womb so that it developed, was born and grew up in a fashion normal to
> humans. The result of this process (the adult human) would certainly
> pass the LTTT, and we would call it intelligent, but to what extent
> would it be artificial?

We don't call ourselves artificial just because we go to school, or are
trained in some job. In the case of an intelligent system the same would
be true: it could not be called artificial after it had learnt from the
real world. The basic structure of the system would still be artificial,
but the intelligence it shows would not. The only systems we could
create, in my opinion, that are 'intelligent' but artificial would be
those that are the goal of soft AI - expert systems that seem
intelligent but are not truly so. That is to say, outside of the
system's scope it would not appear to be intelligent, but within its own
sphere it would appear intelligent or knowledgeable.

> The fact is that, if we evolved an entity to fit a niche, then in a
> real sense that entity's intelligence would be grounded in that niche
> and not a result of our design. It is not only trivial aspects that
> would need to be acquired in situ. Many crucial aspects of the
> entity's intelligence would have to be derived from its situation if
> it was to have a chance of passing the LTTT.
> Given the flexibility of the processes and its necessary ability to
> alter its own learning abilities, it is not clear that any of the
> original structure would survive.

Perhaps the problem lies in the Turing Test itself: the TT tries to test
intelligent systems to find whether they show human intelligence. If we
want to create human intelligence, we already have the necessary tools
and training environment, namely reproduction. We _ARE_ creating
intelligent systems all the time in this way. The real goal we want to
achieve by creating a 'man-made' intelligent system is to understand
ourselves better.

Consider the possibility that we have created intelligent man-made
systems already, and are simply looking for the wrong signs of
intelligence. If we try to establish how intelligent a horse is, we can
only try to identify emotions and intelligent reactions when the horse
is subjected to certain situations. The problem we face is the
other-minds problem: it is quite easy for us to relate to each other
because we have all evolved the same way (to a degree); we understand
when someone is angry or sad, because we would behave in the same way.
What if, in the case of the horse, horses talk to each other all the
time and fully interact - what level of intelligence would a horse
assign to us using a 'Horse TT'? The only things the horse could read
from us are the same as what we can read from them: a limited set of
reactions.

Now consider the general case of an intelligent system; I will assume
here that the system is some sort of very powerful computer. The system
could be truly alive and very intelligent, but the only way to know
whether the system experiences being alive would be to become that
system and experience it for ourselves. Unfortunately, we cannot change
shape into a computer system and find that out for ourselves.

> All this points to a deeper consequence of the adoption of the TT as
> the criterion for intelligence. The TT, as specified, is far more than
> a way to short-cut philosophical quibbling, for it implicates the
> social roots of the phenomena of intelligence. This is perhaps not
> very surprising given that common usage of the term `intelligence'
> typically occurs in a social context, indicating the likely properties
> of certain interactions.
> This interpretation of intelligence is in contrast to others who
> criticise the TT on the grounds that it is only a test for human
> intelligence. I am arguing that this humanity is an important aspect
> of a test for meaningful intelligence, because this intelligence is an
> aspect of and arises out of a social ability, and the society that
> concerns us is a human one.

Another thing to consider here is what would happen if a baby were
placed in an artificial environment and brought up by computers -
controlled by us but behind closed doors. Things that we say to the
child could be encoded into another symbolic language. The child would
grow up and become intelligent; it would speak another language and
perhaps walk in a mechanical-looking way. How would this child score on
the TT? I believe that the problem we have with the TT is that we don't
know what we are looking for in an intelligent system, so we look for
what we can relate to. The only way a system can pass the LTTT is if it
lives as we live and learns as we learn, and therefore becomes natural
rather than artificial - leaving us at square one: we still won't
understand what it is that makes the system intelligent, because that
will come from the natural part of its system.

Finn Sloss -
University of Southampton
BEng Computer Engineering

This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:30 BST