Re: Turing Test

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Sun May 28 2000 - 11:21:58 BST


On Sat, 27 May 2000 Grady, James wrote:

> > HARNAD
> >It only means T3 has to be able to DO everything a human can do (that's
> >the TT). So the symbols need to be grounded, but there are no rules
> >about HOW they should be grounded (nor about their actual history).
>
> Grady:
> Isn't the T2 test about engaging with a human, in a way that is
> Turing-indistinguishable from another human, via letter or e-mail for a
> sufficiently long period of time? So the T3 test must also involve the
> robot engaging with another human, as well as doing everything a human
> can do. If a person can tell the robot apart from a human through being
> with it for a sufficiently long period of time, it must fail. (How else
> can you test the robot?)

Everything you refer to here is correct. But in your original question
you were talking, not about what the T3 could DO, but about HOW it did
it (internally). That is irrelevant to the TT (except if it involves
cheating).

You suggested that somehow symbols could be grounded in the "wrong" way
-- that, for example, for a robot that had never eaten an apple, "apple"
would be wrongly grounded or even ungrounded; and that for a robot that
could not eat at all, perhaps the symbols for all the things that
"nourish" us would be ungrounded.

But here there are two possible things you might be saying:

(1) That a robot that could not eat would not know what it "feels like"
to eat, hence wouldn't really know what "apple" and "food" meant. Here I
would say that you'll never know what anyone FEELS, so focus only on
what it DOES. If its doings cannot be distinguished from ours, it passes
T3. (Do you only believe someone knows what apples are if you have
actually seen them eating apples? If our informal turing-testing of one
another were anywhere near that rigorous, none of us would pass T3
either!)

(2) That a robot that could not eat (which, after all, is a form of
DOING too), could not pass T3 (it could not ground all the symbols that
depend on the sensorimotor capacity to eat).

The reply to (2) is that you might be right, but then you have to give
us some idea (as it is indeed possible to give with T2 vs. T3) as to
why the T4 capacity to eat is essential to passing T3. From what you
said, I don't see it.

Would a human being who, because of a digestive disorder, was on an
intravenous drip from cradle to grave not be TT-indistinguishable from
the rest of us? Do you doubt that their language and thought would be
grounded? They certainly would not have a first-hand experience of what
it's like to eat -- perhaps like a born-blind man who does not
experience seeing -- but in that respect would they not be like those
of us who had never eaten, say, a kumquat?

Remember that symbols need to be grounded in sensorimotor experience,
but not ALL of them have to be grounded directly: Most can be grounded
in recombinations of the directly grounded ones. (Recall the
dictionary that defines all the rest of English from just 2000 defining
words.)
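
(For concreteness, here is a toy sketch of that idea -- not from the
readings, and with a made-up mini-dictionary -- in which a word counts as
grounded if it is in the directly grounded base set, or if every word in
its definition is itself grounded:

    # Toy illustration only; the base set and mini-dictionary are invented.
    base = {"red", "round", "sweet", "eat", "thing"}   # "directly grounded"

    definitions = {
        "apple": ["red", "round", "sweet", "thing"],
        "food":  ["thing", "eat"],
        "fruit": ["sweet", "food"],
    }

    def grounded(word, seen=frozenset()):
        """True if `word` is directly grounded or definable from grounded words."""
        if word in base:
            return True
        if word in seen or word not in definitions:
            return False          # circular or undefined: not grounded
        return all(grounded(w, seen | {word}) for w in definitions[word])

    print(grounded("fruit"))      # True: grounded only by recombination

The point is just that the grounding of "fruit" here is inherited,
recursively, from the directly grounded words.)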

In summary: Distinguish questions about what would be required to give
a system the capacity to successfully pass T3 from questions about what
T3 is insufficient to test.

> >HARNAD
> >One human hasn't the same weaknesses and needs as another. All T3 needs
> >is generic capacities, indistinguishable from one of us.
>
> Grady:
> Surely this is not enough. A T3 robot must not only have the same
> capacities as a human but also the same generic INCAPACITIES. These come largely
> from our weaknesses and vulnerabilities. No non-T4 (or perhaps T5) robot
> is going to have these. A T3 robot can't ground human incapacities. A
> person spending time with such a robot is going to pick up on this and so
> fail the robot. Therefore a T4 robot (or above) is required to pass T3.

Well, I wouldn't want to be too exacting about INcapacities: There are
always enough real people way out on the curve for height, weight,
strength, memory, speed, etc. to make me hesitate to dismiss someone
who outperforms everyone as failing T3.

Try to think specifically about what you are actually imagining that the
candidate could or couldn't do, and ask yourself whether, if a real
person could/couldn't do that, you would doubt that they were a person.
(This is just to prime your intuitions: obviously if you knew someone
was a real person, not even a coma from birth would make you doubt it;
but we are talking about engineering and performance testing here.)

> Grady:
> However there is no reason why this T3 robot should not be able to think.
> Perhaps we need to change the Turing test a little to recognise this. (No
> disrespect to Turing; if it weren't for him we wouldn't be having this
> discussion.)
>
> HARNAD:
> >I couldn't follow that. Why is 1 week special? And convince them it
> >could think (= ?) how? Via T2 (language alone) or via T3? We are back
> >where we started.
>
> Grady:
> Essentially we want to CREATE a robot that can think. The idea is that
> if we created it we would know how it works and so understand thinking.
> Therefore the point of the Turing Test should not be to create a robot
> that can think as a human, but simply a robot that can THINK.

Correct. But the gist of Turing's first and only insight into this
question (about what can think, and what is thinking) is that you can
only tell whether a system can think by what it can DO. And that's the
TT. If it can DO (T3-scale), then it can THINK. ("Egit, ergo Cogitat."
It does, therefore it thinks.)

You seem to be talking here as if there were some DIRECT way to get at
and confirm thinking, other than the doing-T3. There isn't, except in
our own case (and that's Descartes' standard "Cogito Ergo Sum" -- "I
think, therefore I am [thinking]" -- but that hardly helps us in our
reverse-engineering of thinking).

Here is some advice to all of you in preparing for the exam: Think of
this problem as a real, practical problem for the successful
reverse-engineering of thinking; don't get absorbed in its sci-fi or
philosophical aspects that have no bearing on that specific scientific
goal.

> Grady:
> We know that a dolphin is intelligent, not because we are dolphins or
> because dolphins can do everything a human can do, but because we have
> spent time interacting with them and putting them through tests. We have
> concluded that they can think and are intelligent.
> (This is our Dolphin Test, D3).

As I explained, there is no D3 (or any animal T3) because, unlike with
humans, we have neither (1) a reasonably full inventory of what it is
that other species can/can't DO (whereas we do have one for our own
species), nor, even if we did have (1), (2) the intuitive
"mind-reading" capacities with other species that we have with our own.
(And that's why there are some of us -- not me -- who actually have
doubts about whether other species have minds, feel, think, are
intelligent, etc.).

So, first, an animal T-test for a real animal would tell us nothing
anyway. The question here is not about whether real live animals have
minds, but about whether man-made candidates do.

Second, for a man-made, reverse-engineering candidate for
passing an animal T3, we'd have the two limitations described above
(don't know if it can really do everything the animal can do, and can't
tell whether it's T3-distinguishable from the way the animal would do
it).

> Grady:
> Similar to our Dolphin test we could have a Robot test.

James, there is no point talking about a "dolphin test": A dolphin test
for what? We didn't build real dolphins. They resemble people in some
ways, but what are we supposed to make of that?

Nothing is AT ISSUE (in terms of reverse engineering, which is what this
course is about) when we ponder whether or not a dolphin has a mind.
Turing and Turing-testing do not come into it. They only come up when
we speak of robots. And dolphin-robots cannot be adequately T-tested for
the two reasons I gave you.

(Having said that, animal-robots will certainly be approximate
way-stations on the robotic road toward the human T3.)

> Grady:
> If we were to spend time with our robot, interacting with it, testing it,
> eventually we would come to a conclusion as to whether or not it was
> able to think. We would RECOGNISE intelligence in the robot. It
> would be self-evident, the same as it is in the dolphin.

It is not "self-evident" in any case but one's own personal case
(Descartes). It is evident (not self-evident) in other people than
oneself, who are T3-indistinguishable from ourselves in what they can
do. The similarity of dolphins to us also makes it evident (to me) that
they have minds, but that is neither here nor there; it is man-made
candidates that are on trial with the T3, and passing means being
indistinguishable from the non-man-made candidates -- human, alas, not
dolphin.

> Grady:
> As we had created the robot which could think we would understand
> what thinking and intelligence is. Our goal would be accomplished.

You need to give this a bit more thought, to sort out thinking/doing and
exactly what the TT is testing, how, and why. Focus, as I said, on
reverse-engineering intelligence (= thinking, having a mind, cognition,
etc.), not merely on our intuitions about who/what might be thinking.

> Sorry if I am going over the same old ground but it is useful for me to
> clear up these questions before the exam.

Read the prior skywriting on these questions, plus the skyreadings. This
ground has been covered. But ask me again if anything new comes up, or
there is something you don't understand.

Stevan
