Re: Turing Test

From: HARNAD, Stevan
Date: Mon May 29 2000 - 02:30:08 BST

On Sun, 28 May 2000, Edwards, Dave wrote:

> We can already create software (implementation-independent) programs
> that are 'clever', e.g. interactive t1's (I believe Microsoft have a
> computer dog). Can we not advance these programs to get a learning
> algorithm for an interacting virtual human? We can already get proper
> sentence construction; it only needs a subject for our program to
> communicate (I know it's not that simple).

So you'd have a t1. So what?

> This would then be able to talk with us, and do things that we do (that
> it has been told to do by its creators, or learned to do by copying
> us). Allowing its outputs to be movements for a robotic body, we have
> a robot that can't think, but can act like us, can do things that we
> do: talk, eat, run, play chess, etc.

Whether virtual or embodied, this candidate sounds like t1, which is
nothing: a mere toy fragment of our capacity. What was the objective?
To produce a fragment of our capacity (say, chess playing, or
retrieving a newspaper from the door)? Done. Now what?

> We would have a robot that can interact with us. It can't think and
> its symbols are not grounded. This might be able to pass T3; it
> certainly could pass T2. Even if it doesn't, surely this is good
> enough. OK, so it doesn't think, but the average bod-on-the-street
> would not be able to tell that it doesn't.

Not sure what you are driving at. If it's t1, that doesn't mean it can
pass T2 or T3. If it can pass T3, we're home (and you can go to the
parts of this discussion that consider the implications of that
successful feat of reverse engineering). If it can't pass T3, we're
nowhere. If it can only pass T2, we're back in Searle's room.

You can fool the average bod-on-the-street for a while -- but not for a
lifetime:

    Harnad, S. (1992b) The Turing Test Is Not A Trick: Turing
    Indistinguishability Is A Scientific Criterion. SIGART Bulletin
    3(4) (October) 9 - 10.

> Is this not good enough? We can't tell if a thinking robot actually
> thinks. We can't tell if this 'simulated' thinking robot thinks either
> (except by asking its creator).

You seem to be recapitulating things we've already considered in the
course. According to Turing, passing the TT is the best we can ask for, if
we wish to maximize the probability that a candidate thinks.

(Asking a candidate's creator tells you nothing: If I had created a toy,
and the Nobel committee asked me whether it can think, I'd say it could:
wouldn't you?)

> I know the course is about cognition, and not fooling us, but could we
> tell that it is a t1 and not a T3?

I don't get your point. Are you saying that a toy robot that could do a
few things we can do, but not the rest, could pass T3? But by saying it
can only do a bit, you are already saying it fails T3, whether or not it
can fool some of us for a while...


This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:29 GMT