From: Basto Jorge (jldcb199@ecs.soton.ac.uk)
Date: Sun Jun 03 2001 - 17:16:53 BST
Basto:
Edmonds claims that the common-sense dichotomy of Artificial versus
Organic should be reconsidered, since it seems possible to go from
one to the other: an artificial, man-made system can become
intelligent if given the conditions to evolve and develop with proper
interaction with the environment. Edmonds claims intelligence is at
least partially grounded "beyond" the intelligent
system/architecture, so the social infrastructure and the
epi-"systemic" factors are a required condition for its emergence.
From this he concludes that the Turing machine is not adequate to
replicate intelligent systems, on the basis that it is purely a
computational function-replication tool. The Turing Test, by
contrast, is said to be an accurate empirical test since it includes
factors outside the purely computational ones.
EDMONDS:
The elegance of the Turing Test comes from the fact that it is not
a requirement upon the mechanisms needed to implement intelligence
but on the ability to fulfill a role. In the language of biology,
Turing specified the niche that intelligence must be able to occupy
rather than the anatomy of the organism.
Basto:
The Turing Test is a test of reverse-engineered mind models and of
how far those models can become functionally indistinguishable from
the human mind's observable functionality. The Turing Test does not
care about "what is intelligence", since this can be provisionally
answered by whatever entity succeeds in passing the test. According
to the point of view underlying Turing's reasoning, to dwell further
on the quest for intelligence, that is, to use any criterion other
than functionality, is to become vulnerable to the other-minds
problem. The ability to fulfill a social role as the criterion for
intelligent behavior separates structure from function, as social
roles, although existing because of biological mechanisms, are not
completely dependent on the biological constraints that are their
cradle. That is to say, we have the possibility of engaging in social
behavior because our genotype infrastructure allows it, but those
social behaviors develop into an entity of their own that certainly
extends and/or complements our biological constraints beyond the
gene-rules.
EDMONDS:
What is unclear from Turing's 1950 paper is the length of time that
was to be given to the test. It is clearly easier to fool people if
you only have to interact with them in a single period of
interaction.
Basto:
A Turing Test hierarchy was introduced recently to overcome this and
some other incomplete or misunderstood issues raised by the original
paper. It is, however, my impression after reading the original
paper that the test is meant to be something more than a short-term
trick, as Turing couldn't possibly have thought he could test a
system on 5 minutes of functional indistinguishability and call it
an intelligent system. The proposed Turing Test hierarchy mirrors
this by stating that t1, the starting point of the hierarchy,
consists of "toy models" that capture only fragments of our total
cognitive capacities; the lowercase t symbolizes that this is
something less than a true Turing Test, an outsider not considered
part of the Turing Test class at all. The hierarchy goes up to T5,
which represents internal and external structural
indistinguishability as well as cause-effect indistinguishability.
The level thought to be the one Turing had in mind is T2, which
challenges a model to match our pen-pal cognitive capacities
indefinitely.
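As a rough illustration only (the one-line glosses below are my own
paraphrases of the hierarchy as summarized above and later in this
commentary, not official definitions; T4 is not discussed here and
my gloss for it is from memory), the hierarchy could be sketched as
a simple lookup table in Python:

    # Sketch of the Turing Test hierarchy; glosses are paraphrases.
    turing_hierarchy = {
        "t1": "toy models of fragments of our cognitive capacities "
              "(not a true Turing Test)",
        "T2": "pen-pal (linguistic) indistinguishability, "
              "indefinitely",
        "T3": "embodied robotic version of T2: sensorimotor plus "
              "symbolic indistinguishability",
        "T4": "adds internal (e.g. neural) structural "
              "indistinguishability",
        "T5": "internal and external structural "
              "indistinguishability, plus cause-effect "
              "indistinguishability",
    }

    for level, gloss in turing_hierarchy.items():
        print(f"{level}: {gloss}")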
EDMONDS:
It is something in the longer-term development of the interaction
between people that indicates their mental capabilities in a more
reliable way than a single period of interaction. The deeper
testing of those abilities comes from the development of the
interaction resulting from the new questions that arise from
testing the previous responses against one's interaction with the
rest of the world. The longer the period of interaction lasts and
the greater the variety of contexts it can be judged against,
Basto:
A long-term interaction indicates the degree of intelligence of an
individual, but it does not take very long to assert that an
individual has some intelligence (even though we take some
intelligent behaviors for granted). Obviously, the longer the
interaction, the more opportunities there are to diversify and build
a better portrait of the subject. That is why t1, the first level of
the Turing hierarchy, is not part of a credible intelligence test:
entities/systems that succeed in passing the t1 test are only
showing good performance replication on a small scale, on a partial
fragment of our cognitive capacities, and for a limited period of
time.
EDMONDS:
For the above reasons I will adopt a reading of the Turing Test,
such that a candidate must pass muster over a reasonable period of
time, punctuated by interaction with the rest of the world. To make
this interpretation clear I will call this the long-term Turing
Test (LTTT). The reason for doing this is merely to emphasize the
interactive and developmental social aspects that are present in
the test. I am emphasizing the fact that the TT, as presented in
Basto:
It is not clear what is meant by interaction, but the article seems
closely related to what the Turing hierarchy calls the T2-level
test. Now there is evidence that without grounding a T2-passing
system with sensorimotor mechanisms that let it interact with the
world and ground its symbolic knowledge within the system itself,
the system in question is subject to Searle's Chinese Room Argument
(CRA) as a counterexample, and its potential to develop human-like
intelligence is therefore doomed. So it is not just a matter of
succeeding at a life-long pen-pal challenge, which is arguably
impossible through symbol manipulation alone, as it will almost
certainly stumble into the frame problem at some point; it is also a
matter of the system knowing what those symbols mean, by relating
them, inside the system, to objects from the environment that are
perceived with sensorimotor devices. A leap to T3-passing models,
the embodied hybrid robotic version of T2, combining the symbolic
system of T2 with dynamic sensorimotor devices and capable of
interacting causally with the world, seems to be the solution.
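As a purely illustrative sketch in Python (every name and the toy
"sensor" below are mine, not anything from Edmonds or the hierarchy
papers), the difference between an ungrounded and a grounded symbol
might be caricatured like this:

    # Toy illustration: an ungrounded symbol is related only to
    # other symbols; a "grounded" symbol is additionally tied to
    # sensor-derived data. All names here are hypothetical.

    ungrounded_lexicon = {"apple": "a kind of fruit",
                          "fruit": "a kind of food"}
    # Definitions bottom out in more symbols: nothing here connects
    # "apple" to anything in the world.

    def classify_from_sensors(redness: float, roundness: float) -> str:
        """Caricature of a sensorimotor category detector."""
        if redness > 0.7 and roundness > 0.8:
            return "apple"
        return "unknown"

    # A grounded system links the same token to the detector that
    # picks out its referent in the environment, not only to other
    # tokens.
    grounded_lexicon = {"apple": classify_from_sensors}

    print(grounded_lexicon["apple"](redness=0.9, roundness=0.85))
    # -> apple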
EDMONDS:
Turing's paper is not merely a task that is widely accepted as
requiring intelligence, so that a successful performance by an
entity can cut short philosophical debate as to its adequacy.
Basto:
And this is indeed what Searle showed with his counterexample to the
Turing Test proposed in the original paper, which is equivalent to
T2. Searle's Chinese Room Argument shows that it is possible to have
a system pass the T2 test without any understanding whatsoever. In
other words, it is possible to replicate the intelligent behavior
without requiring intelligence, if we assume machine independence
and that symbol manipulation is all it takes. Now if we go one step
up the ladder of the hierarchy we reach T3, the embodied version of
T2, and we overcome Searle's argument by having some parts of the
system's cognitive implementation DEPENDENT on sensorimotor
mechanisms that ground the manipulated symbols from WITHIN the
system, i.e., without requiring the mediation of an external mind.
EDMONDS:
Rather that it requires the candidate entity to participate in the
reflective and developmental aspects of human social intelligence,
so that an imputation of its intelligence mirrors our imputation of
each other's intelligence.
Basto:
It is not clear what Edmonds means by "human social intelligence"
here. Either there is ONE definition of intelligence, under which
intelligence can depend to a large extent on social skills, or we
have separate definitions of intelligence, one for each of our
cognitive skills, and then we can get lost in what we might find.
EDMONDS:
That the LTTT is a very difficult task to pass is obvious (we might
ourselves fail it during periods of illness or distraction), but
the source of its difficulty is not so obvious. In addition to the
difficulty of implementing problem-solving, inductive, deductive
and linguistic abilities,
Basto:
I do not think this is correct, since by no means do we ever FAIL
the lifelong Test. This sounds like saying that one fails to "live"
whilst one is still living, even if one sleeps or stays idle most of
the time. If the lifelong Turing Test is to be taken seriously, then
there is no way a human will fail, since it is known a priori that
humans are endowed with intelligence. Even if it "seems" we are
failing at some point, success is guaranteed, since we are
definitely able to carry on and deploy other
intelligence-demonstrating skills. In this respect, distraction is
not failure, nor is a period of illness; the test is the game of
life, and our "failures" are small deviations, not at all comparable
to the catastrophic frame-problem failures of the systems created so
far. Ours are merely intelligence limitations in specific fields and
at certain depths of knowledge, while ALWAYS showing that we have
some form or another, to some degree or another, of intelligence.
The Turing Test has no partial results; that is, it is not possible
to say one succeeded at x percent of the Test. Rather it is a total
test with a total answer of pass or fail. Therefore it is not
possible for a human to "fail" the lifelong test during periods of
illness or distraction, because over the whole lifelong test the
final result would be a pass.
EDMONDS:
One also has to impart to a candidate a lot of background and
contextual information about being human including: a credible past
history, social conventions, a believable culture and even
commonality in the architecture of the self. A lot of this
information is not deducible from general principles but is
specific to our species and our societies.
Basto:
It is arguable whether our past history makes intelligent
performance dependent on it in this way. The fact is that our past
history can, in the long term, be reduced to a highly complex net of
synaptic and dendritic associations and representations, encoded
with an even higher degree of complexity in our massively convoluted
brain. So if we can replicate this internal structural information
and reach the right state, why and where would this not be deducible
from general principles (the same physical laws we are all subject
to)? Even if it is certain that those conventions, beliefs and
history are NOT part of the initial brain structure, we have to
admit that they are SOMEHOW encoded or represented in the brain
afterwards (i.e., caused by epigenetic factors but nonetheless
present in the genetically-determined brain structure); so if we can
replicate this (not saying it is easy) and reach the same correct
state (whatever that is), one has to conclude that although the
development issue is a concern, it is certainly NOT a constraint.
EDMONDS:
I wish to argue that it is far from certain that an artificial
intelligence (at least as validated by the LTTT) could be
deliberately constructed by us as a result of an intended plan.
There are two main arguments against this position that I wish to
deal with. Firstly, there is the contention that a strong
interpretation of the Church-Turing Hypothesis (CTH) to physical
processes would imply that it is theoretically possible that we
could be implemented as a Turing Machine (TM), and hence could be
imitated sufficiently to pass the TT. I will deal with this in
section 2. Secondly, that we could implement a TM with basic
learning processes and let it learn all the rest of the required
knowledge and abilities. I will argue that such an entity would no
longer be artificial in the section after (section 3). I will then
conclude with a plea to reconsider the social roots of intelligence
in section 4.
Basto:
This is exactly what the Turing hierarchy addresses. And I think it
is valid to say we could not be implemented as a Turing Machine,
since Turing Machines are purely formal representations of
computation. A strong CTH would mean that cognition is only
computation, and that a computation-only system could therefore
account for all our cognitive capacities. The Turing Test hierarchy,
however, extends beyond the Church-Turing hypothesis by including
tests that do not depend on computation alone, and these are, I
think, outside the Church-Turing hypothesis's scope. The TT Edmonds
is concerned with here is surely the T2 test of the hierarchy, but
it was shown above that there are better and more accurate Turing
Test levels in the hierarchy for testing models that have symbolic
AND dynamic features and are better armed for the quest of
replicating intelligent behavior. The Turing Machine as I know it is
not the right model to test, due to its relation to strong AI
principles.
EDMONDS:
The argument to show this is quite simple; it derives from the fact
that the definition of a TM is not constructive: it is enough that
a TM could exist, there is no requirement that it be
constructable.
Basto:
In fact, the formal definition of a Turing Machine is not
constructive. These problems are researched in the field that today
answers to the name of complexity theory. Complexity theory provides
us with many useful concepts about what can and cannot be computed
and the like, but it does not give an implementation recipe. So we
may assume something like "let's suppose we have a tape-reading
machine, and this tape is infinitely long"; if we then obtain valid
theoretical results, it does not follow that we can make the jump
from theory to reality, because there is no such thing as an
infinite tape in the real world, and this initial requirement can be
a limitation on any implementation. Whenever I read complexity
theory results, I realize that some issues are of no help when one
wants to implement something. Take, for instance, the notion of
space complexity: some results on space-complexity bounds are doomed
to hit a barrier in time, i.e., we can obtain a result saying that a
certain space bound is only attainable by an operation taking an
infinitely long time, so we stumble FIRST on the time barrier even
if the implementation was possible in its space requirements. The
same arguments can be made for time-complexity barriers.
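A minimal sketch in Python of exactly this theory-to-reality gap
(the machine below is a trivial example of mine, not from the
paper): the mathematical definition grants an infinite tape, while
any real implementation can only grow a finite tape on demand and
must give up when it runs out of time or memory.

    def run_tm(transitions, tape, state="start", head=0,
               max_steps=10000):
        """transitions: (state, symbol) -> (state, symbol, move).
        The tape is finite and grown on demand: the practical
        stand-in for the formally infinite tape."""
        tape = list(tape)
        for _ in range(max_steps):
            if state == "halt":
                return "".join(tape).rstrip("_")
            state, tape[head], move = transitions[(state, tape[head])]
            head += 1 if move == "R" else -1
            if head == len(tape):   # grow the "infinite" tape right
                tape.append("_")
            elif head < 0:          # ...and left
                tape.insert(0, "_")
                head = 0
        raise RuntimeError("step bound hit: time is finite too")

    # Example: flip every bit, halting on the first blank.
    flip = {("start", "0"): ("start", "1", "R"),
            ("start", "1"): ("start", "0", "R"),
            ("start", "_"): ("halt", "_", "R")}
    print(run_tm(flip, "0110_"))  # -> 1001

The max_steps guard is where theory and implementation part ways:
the formal definition needs no such bound.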
EDMONDS:
However, the TT (even the LTTT) is well suited to this purpose,
because it is a post-hoc test. It specifies nothing about the
construction process. One can therefore imagine fixing some of
the structure of an entity by design but developing the rest in
situ as the result of learning or evolutionary processes with
feedback in terms of the level of success at the test.
Basto:
After claiming that the Turing Test (the one he means is perhaps t1)
is not suited to mirror human-like intelligence, by constructive and
philosophical arguments, Edmonds introduces his LTTT (similar to T2
in the Turing hierarchy) and claims a system passing this test could
succeed in mirroring our intelligence. Edmonds proposes that this
system could be roughly built to learn and interact, and so acquire
intelligence by means of evolution, learning, and "personal"
development. If I am right in assuming Edmonds's LTTT is equivalent
to T2 in the Turing Test hierarchy, then we already have evidence
that such a system would fail to achieve its goals. But a T3-passing
system, i.e., T2 grounded through T3 with a body and sensorimotor
devices, could very well succeed in the task.
And that is when Edmonds meets Dennett's Cog.
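A minimal sketch in Python of the strategy Edmonds describes (fix
part of the structure by design, develop the rest in situ with
feedback from the test); everything here is hypothetical scaffolding
of mine, since a real LTTT is an open-ended social judgment, not a
numeric score:

    import random

    def designed_structure():
        """The fixed, hand-designed part: a parameter vector."""
        return [0.0] * 8

    def lttt_score(params, niche):
        """Hypothetical stand-in for LTTT feedback: higher is
        better."""
        return -sum((p - t) ** 2 for p, t in zip(params, niche))

    def develop_in_situ(params, niche, generations=500, noise=0.1):
        """Acquire the undesigned remainder by variation and
        selection."""
        best, best_score = params, lttt_score(params, niche)
        for _ in range(generations):
            candidate = [p + random.gauss(0, noise) for p in best]
            score = lttt_score(candidate, niche)
            if score > best_score:  # keep what the test rewards
                best, best_score = candidate, score
        return best

    niche = [random.uniform(-1, 1) for _ in range(8)]
    entity = develop_in_situ(designed_structure(), niche)
    # Most of what `entity` ends up being was drawn from the niche,
    # not from the design; which is Edmonds's point about its
    # artificiality.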
EDMONDS:
At the end of the previous section, I raised the possibility that
an entity that embodied a mixture of designed elements and learning
in situ (using a source of randomness), might be employed to
produce an entity which could pass the LTTT. One can imagine the
device undergoing a training in the ways of humans using the
immersion method, i.e. left to learn and interact in the culture it
has to master. However, such a strategy brings into question the
artificiality of the entity that results. Although we can say we
constructed the entity before it was put into training, this may be
far less true of the entity after training. To make this clearer,
imagine if we constructed
Basto:
Well observed. This LTTT here resembles T3 in the Turing hierarchy,
and the source of randomness Edmonds refers to can be thought of as
the input from the sensorimotor devices and the feedback from the
environment. It seems implicit in this passage that the system
devised here has a body of some sort with which it interacts with
the world. Edmonds claims that this man-made system, after acquiring
intelligence partially from the environment and DUE to the
environment, could no longer be called artificial. Well, this
requires an accurate definition of artificial that unfortunately
Edmonds does not give (he did take care, however, to give one for
intelligence-as-he-sees-it), and therefore I think it is not an
issue of great importance. I do agree that intelligence is a result
of emergent factors at different evolutionary stages PLUS emergent
factors at different developmental phases. I do not think that, to
achieve the state of intelligence an individual has at a certain
point in time, one has to mimic the whole or parts of the
evolutionary and developmental process up to the moment in question.
My reason is that the moment in question is certainly REPRESENTED or
ENCODED somehow in the individual, and replicating these internal
representations and encodings merely raises the degree of
complexity.
EDMONDS:
We know that a significant proportion of human intelligence can be
attributed to the environment anyway (Neisser et al., 1996) and we
also know that a human that is not exposed to language at a suitable
age would almost certainly not pass the LTTT (Lane, 1976).
Therefore the developmental process is at least critical to the
resulting manifestation of human intelligence. In this case, we
could not say that we had succeeded in creating a purely artificial
intelligence (we would be on even weaker ground if we had not
determined the
Basto:
The fact that intelligence is partially grounded in the environment
does not have anything to do with the artificial/natural question
here. We can solve Edmonds's question by stating that the system
would have natural intelligence even if built on an artificial
infrastructure. I mean natural as opposed to artificial. All sorts
of arguments could be used, because it is not clear what is meant by
artificial here: is it synthetically made? man-made? merely
non-organic? Nevertheless, what might also be questioned here is the
name Artificial Intelligence, because this could be taken to mean an
ARTIFICIAL entity capable of mirroring our intelligence (whereby we
assume one UNIQUE kind of intelligence), or a reflection of our
intelligence by means of an ARTIFICIAL REPLICATION of it (and here
we assume at least two kinds of intelligence: the artificial
indistinguishable one and the natural real thing). The question
still depends on a rigorous definition of the term artificial, which
I am not sure of, although I believe that if we have artificial
intelligence that is indistinguishable from our intelligence, we
might as well drop the "artificial" prefix.
EDMONDS:
The fact is, that if we evolved an entity to fit a niche (including
that defined by the TT or LTTT), then in a real sense that entity's
intelligence would be grounded in that niche and not as a result of
our design. It is not only trivial aspects that would need to be
acquired in situ. Many crucial aspects of the entity's intelligence
would have to be derived from its situation if it was to have a
chance of passing the LTTT. For example: the meaning of its symbols
(Harnad, 1990), its social reality (Berger, 1966) and maybe even
its self (Burns and Engdahl, 1998) would need to have resulted from
such a social and environmental grounding. Given the flexibility of
the processes and its necessary ability to alter its own learning
abilities, it is not clear that any of the original structure would
survive. After all, we do not call our artifacts natural just
because they were initiated in a natural process (i.e. our brains),
so why vice versa?
Basto:
Throughout the article it is not clear which Turing Test in the
hierarchy Edmonds alludes to. Since he is certainly aware of the
symbol grounding problem and the solutions for it, it can be argued
that he actually means an embodied version of T2 (i.e., T3). If this
is the case, nothing much remains to be said. It is however a fact
that Edmonds does not mention any need for an embodied system as a
requirement for succeeding as a model mirroring our intelligent
capacities, although he does mention the need to INTERACT with the
environment (a dynamic sensory interaction is not necessarily
implied) and the use of the environment as part of the artificial
cognitive system (or so I interpret him). It is true that, to a
certain extent, our genetic infrastructure leaves enough room for
self-change, self-regulation, and learning capabilities, but it is
not completely true that the original structure disappears, as seems
to be implied by the phrase "...that any of the original structure
would survive". The original structure is in fact responsible for,
and flexible enough to permit, those "additions" or improvements.
The model created to pass the Turing functional indistinguishability
test can be made so that it considers those issues of flexibility,
extensibility, learning, perception and self-regulation.
EDMONDS:
All this points to a deeper consequence of the adoption of the TT
as the criterion for intelligence. The TT, as specified, is far
more than a way to short-cut philosophical quibbling, for it
implicates the social roots of the phenomena of intelligence. This
is perhaps not very surprising given that common usage of the term
intelligence typically occurs in a social context, indicating the
likely properties of certain interactions (as in the animal
trapping example above).
Basto:
Indeed this may be Turing's crucial insight: a conception of
intelligence based on WHAT intelligence DOES, not drifting around
the hard-to-explore regions of WHAT intelligence IS. Considering
intelligent whatever physical system shows evidence of an ability to
interact socially in the same way as any of us humans (after all,
this is the only way we can ascertain each other's intelligence
without having to go inside each other) is the core of the Turing
Test as Turing devised it.
EDMONDS:
This is some distance from the usual conception of intelligence
that prevails in the field of Artificial Intelligence, which seems
overly influenced by the analogy of the machine (particularly the
Turing Machine). This is a much abstracted version of the original
social concept and, I would claim, a much impoverished one. Recent
work has started to indicate that the social situation might be as
important to the exhibition of intelligent behavior as the physical
situation (Edmonds and Dautenhahn, 1998). This interpretation of
intelligence is in contrast to others (e.g. French, 1989) who
criticize the TT on the grounds that it is only a test for human
intelligence. I am arguing that this humanity is an important
aspect of a test for meaningful intelligence, because this
intelligence is an aspect of and arises out of a social ability and
the society that concerns us is a human one.
Basto:
Edmonds creates a dichotomy between the Turing Test and the Turing
machine, showing the broad range and benefits of the former and the
constraints and limitations of the latter. The Turing machine is
just an attempt to capture and formalize the notion of computation,
and is therefore only useful for capturing the cognitive
capabilities that are computational. Edmonds claims that there is
definitely more to cognition than meets the Turing machine, and that
the Turing Test, on the other hand, is a more powerful tool, able to
assess those extra-computational factors outside the realm of purely
computational systems (such as the Turing machine).