Re: Turing Test

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Wed Feb 16 2000 - 20:46:41 GMT


http://cogprints.soton.ac.uk/abs/comp/199807017

On Thu, 10 Feb 2000, Paramanantham, Daran wrote:

> Paramanantham:
> This game is basically a test to find out whether a machine can
> deceive the interrogator, and how well it does so compared to a human.
> The outcome depends on the quality of the questions asked, and on how
> well the machine can learn from them.

Is it about deception? If a machine is designed that can do everything
I can, is that deception? Is it not possible that it really works the
way I do, inside?

> Paramanantham:
> It is important to understand the working of a machine, for future
> investigations, research, improvements etc. This enables designers to
> enhance their understanding of machines.

Yes, but Turing-testing is not just about ways to engineer better
machines; it's also about reverse-engineering how the brain works, to
produce a thinking mind.

> > Shaw:
> > This seems to be quite a convincing argument in favour of machines
> > eventually being able to think. Can't the brain be considered a
> > discrete-state machine: surely a neuron either fires or it doesn't
> > and it is this that determines the effect on the rest of the
> > brain.
>
> Paramanantham:
> For this to be feasible, machines would have to be given a set of
> training data initially. Hence, for a machine to 'think' it needs to
> undergo supervised learning. If this is the case, will machines ever
> be able to 'think' for themselves?

Is that a question or an answer?

> Paramanantham:
> If a machine can 'learn' from its environment (whether following rules
> or not), then it is said that it can think. But do we (humans) follow
> rules and learn from them?

Hard to tell from this what you think is the case, and why...

> Paramanantham:
> This comes back to the idea of how well a machine can interact with
> its environment. Machines are known to be consistent, and this makes
> them correct and sound. However, if a machine were to change its
> thoughts every time it was in a different state, then this would lead
> scientists to argue that even if a machine were able to 'think', it
> would be logically incorrect and unsound.

I couldn't follow this: people sometimes reason soundly, sometimes not;
the same goes for machines. Both sometimes make mistakes. There seems
to be no principled basis for a distinction here.

> Paramanantham:
> For a machine to learn anything it first has to be given information,
> just like humans; from then on, it is up to the machine how it deals
> with (processes) that information.

So is thinking the same as information processing? How? Why? Is my SGI,
then, which can't even pass the TT, thinking, because it's processing
information?

> Paramanantham:
> The types of information, e.g. pleasure,
> pain and other emotions, will have to be provided.

How do you provide pleasure and pain to a computer? And are pleasure
and pain information?

> Paramanantham:
> If this is
> possible, and a machine can learn from these and adapt, then it is
> possible for machines to think.

If we already knew a machine was experiencing pleasure, the question of
whether it was thinking or not would be a very minor one...

> Paramanantham:
> The main point to be made
> about 'can machines think' is how much the designers want the
> machines to think: is it to increase performance, or to imitate a
> human?

No, I think the main point is not about how much, but about how, and
whether, and why, and how would we be able to tell...

Stevan



This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:26 GMT