Re: Turing Test

From: Edwards, Dave (dpe197@soton.ac.uk)
Date: Sun May 28 2000 - 12:48:09 BST


We can already create software (implementation-independent) programs that
are 'clever', e.g. interactive t1s (I believe Microsoft have a computer dog).
Can we not advance these programs into a learning algorithm for an
interacting virtual human? We can already get proper sentence construction;
the program only needs a subject to communicate about (I know it's not that
simple).
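
To make the point concrete, here is a minimal sketch (mine, not any real
system; the templates and names are made up) of the kind of t1-level
sentence construction I mean: canned templates with a subject slot, and no
understanding behind them.

    import random

    # Made-up templates: the 'proper sentence construction' part.
    TEMPLATES = [
        "Tell me more about {subject}.",
        "Why do you say {subject} matters?",
        "I had never thought about {subject} that way.",
    ]

    def reply(subject):
        # Pick a canned template and fill in the subject slot.
        # No grounding: the program manipulates the string without
        # any idea what the subject refers to.
        return random.choice(TEMPLATES).format(subject=subject)

    print(reply("chess"))   # e.g. "Tell me more about chess."

Everything beyond this (learning new templates by copying us, choosing
sensible subjects to talk about) is where the hard part starts.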

This would then be able to talk with us, and do things that we do (things
that it has been told to do by its creators, or has learned to do by copying
us). By allowing its outputs to drive the movements of a robotic body, we
would have a robot that can't think but can act like us: it can do the
things we do; talk, eat, run, play chess, etc.

We would have a robot that can interact with us. It can't think and its
symbols are not grounded. It might be able to pass T3, and it certainly
could pass T2. Even if it doesn't, surely this is good enough? OK, so it
doesn't think, but the average bod-on-the-street would not be able to tell
that it doesn't.

Is this not good enough? We can't tell if a thinking robot actually thinks.
We can't tell if this 'simulated' thinking robot thinks either (except by
asking its creator).

I know the course is about cognition, and not about fooling us, but could we
tell that it is a t1 and not a T3?

Edwards, Dave
