Re: Turing Test

From: Terry, Mark <>
Date: Thu Feb 10 2000 - 00:42:16 GMT

> I propose to consider the question, "Can machines
> think?".

Instead of answering this question directly, Turing offers another
question, the 'imitation game'. The game involves three players: a
machine (A), a man (B) and another man (C). C knows A and B only by
the tags X and Y. C's task is to correctly identify X as A and Y as B
(or vice versa). The machine is of course attempting to mimic a man,
while B can simply answer with the truth. The question is: will C
correctly identify A and B as often as when A was a woman (pretending
to be a man) instead of a computer?
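The structure of a round can be sketched in a few lines of Python. This is purely illustrative: the player and judge functions here are hypothetical stand-ins, not anything Turing specified.

```python
import random

def imitation_game(machine_reply, human_reply, judge, questions):
    """One round of the game: the judge sees two anonymous players,
    tagged X and Y, and must decide which one is the machine.
    All player/judge functions are hypothetical placeholders."""
    # Randomly assign the machine and the human to the tags X and Y.
    players = {"X": machine_reply, "Y": human_reply}
    if random.random() < 0.5:
        players = {"X": human_reply, "Y": machine_reply}

    transcript = [(q, players["X"](q), players["Y"](q)) for q in questions]
    guess = judge(transcript)  # judge returns "X" or "Y"
    truly_machine = "X" if players["X"] is machine_reply else "Y"
    return guess == truly_machine  # True means the judge unmasked the machine

# Trivial stand-ins: both players claim humanity; the judge guesses.
machine = lambda q: "I am certainly human."
human = lambda q: "Yes, of course I am human."
judge = lambda transcript: random.choice(["X", "Y"])

result = imitation_game(machine, human, judge, ["Are you human?"])
```

A guessing judge identifies the machine about half the time; Turing's question is whether any real interrogator could do reliably better against a sufficiently good machine.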

> 2. Critique of the New Problem

> The game may perhaps be criticised on the ground that
> the odds are weighted too heavily against the machine.
> If the man were to try and pretend to be the machine he
> would clearly make a very poor showing.

> [If] a machine can be constructed to play the
> imitation game satisfactorily, we need not be troubled
> by this objection.

It does seem self-centred of us to define thinking in purely human
terms. Does an animal or a baby think? Is such thought less valid than
adult human thought? A key question here: is imitation thinking? Is a
child who copies a parent thinking, or merely behaving? Are these the
same thing? Surely thought cannot be reduced to imitation. If
thought is imitation, what am I imitating as I write this?

In case you were worried about what a "machine" in the game is:

> We only permit digital computers to take part in our game.

Turing was well aware that the computers of 1950 could not play the
game well. He was concerned with whether any machine we can conceive
of could. Today we might argue that computers are sufficiently
powerful, but we can always envisage more speed (in terms of
computations per second), better programs (the sets of instructions
the computer acts upon), and so on. Because of this, even if no one
ever produces a computer which can play the game well, the question
still holds. This makes it a very hard question to answer "no" to.

> (1) The Theological Objection

> Thinking is a function of man's immortal soul. God has
> given an immortal soul to every man and woman, but not
> to any other animal or to machines. Hence no animal or
> machine can think.

It is always hard to refute theological arguments, but Turing argues
that this one underestimates God's ability. Historically, religion and
science have disagreed on many things. Turing cites Copernicus as a
good example (the Church branded the notion of the Earth not being the
centre of the universe heresy). Rather more convincing is

> (3) The Mathematical Objection

> The questions that we know the machines must fail on
> are of this type,
> "Consider the machine specified as follows. . . . Will
> this machine ever answer 'Yes' to any question?" The
> dots are to be replaced by a description of some machine.
> When the machine described bears a certain
> comparatively simple relation to the machine which is
> under interrogation, it can be shown that the answer is
> either wrong or not forthcoming. This is the
> mathematical result: it is argued that it proves a
> disability of machines to which the human intellect is
> not subject.

There is no denying the validity of this argument. Turing points out
that humans make mistakes as well, but this seems an unsatisfactory
response. The objection would only stand if a human could answer such
a question satisfactorily where any computer must fail. I asked a
human this question; the answer was "not forthcoming". Why should
machines be expected to untangle problems a human also gets tied up in?
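The self-referential trap behind Turing's "questions the machines must fail on" can be made concrete with a small diagonal-argument sketch. This is an illustration of the general result, not Turing's own construction: given any claimed decider for "will this machine ever answer 'Yes'?", we can build a machine that makes the decider's verdict about it wrong.

```python
def build_defeater(decider):
    """Given any claimed decider(machine) -> bool answering 'will this
    machine ever answer Yes?', construct a machine whose behaviour
    contradicts the decider's verdict about that very machine."""
    def defeater():
        if decider(defeater):
            return "No"   # decider predicted Yes, so answer No
        else:
            return "Yes"  # decider predicted never-Yes, so answer Yes
    return defeater

# Whatever the decider claims about `defeater`, it is mistaken:
optimist = build_defeater(lambda m: True)    # predicts "will answer Yes"
pessimist = build_defeater(lambda m: False)  # predicts "will never answer Yes"
print(optimist())   # "No"
print(pessimist())  # "Yes"
```

No decider, human or mechanical, escapes this construction; the open question is only whether humans are subject to the same limitation.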

> (4) The Argument from Consciousness

> This argument is very well expressed in Professor
> Jefferson's Lister Oration for 1949, from which I quote.
> "Not until a machine can write a sonnet or compose a
> concerto because of thoughts and emotions felt, and not
> by the chance fall of symbols, could we agree that
> machine equals brain-that is, not only write it but know
> that it had written it. No mechanism could feel (and not
> merely artificially signal, an easy contrivance)
> pleasure at its successes, grief when its valves fuse,
> be warmed by flattery, be made miserable by its
> mistakes, be charmed by sex, be angry or depressed when
> it cannot get what it wants."

Turing argues that the extreme view of this argument is

> the only way to know that a man thinks is to be that particular man.

Turing believed that this problem does not have to be debated to answer
his question. The fact that we don't understand something doesn't mean
we can't define it and recognise an instance of it by that instance's
characteristics. I don't know how an aeroplane works, but I can still
identify one. This point leads to interesting questions, such as: is
thinking a requisite of consciousness?

Turing then treats a similar argument, namely: can a computer enjoy
strawberries? If notions of pleasure and pain are, as many claim,
'all in the mind', then if a machine believes that it enjoys
strawberries, doesn't it therefore do so?

The next argument is that

> a machine cannot be the subject of its own thought

We first have to grant that a machine has thoughts before deciding
whether it can be the subject of them. However, Turing states that a
computer executing a program is "thinking" about this program. This
definition seems to trivialise the entire question of machines
thinking: if we accept it, every machine which executes some
instruction is thinking about it. I cannot accept this definition.
However, programs can be written which modify themselves, other
programs, or the machine itself. In this sense, then, a computer can
indeed be the subject of its own thought.
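A toy example of this narrow sense of self-reference: an object that inspects its own behaviour and rewrites it. This is only a sketch of "a program modifying itself", with an invented class name, not a claim about what Turing had in mind.

```python
class SelfRewritingCounter:
    """A toy object that is, in a very narrow sense, 'the subject of
    its own thought': it examines its own method and replaces it."""

    def step(self):
        return 1

    def reflect(self):
        # The object observes its own current behaviour...
        current = self.step()
        # ...and rewrites it: future calls to step() return twice as much.
        self.step = lambda: current * 2

c = SelfRewritingCounter()
print(c.step())   # 1
c.reflect()
print(c.step())   # 2
```

Whether such mechanical self-modification deserves to be called "thought about itself" is, of course, exactly the point under dispute.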

> The criticism that a machine cannot have much diversity
> of behaviour is just a way of saying that it cannot have
> much storage capacity.

This ends that slightly naive argument.

Talking of Babbage's Analytical Engine (widely regarded as the first
model for a computer) Lady Lovelace wrote:

> "The Analytical Engine has no pretensions
> to originate anything. It can do whatever we know how to
> order it to perform" (her italics).

This argument still rings true to a large extent. However, computers
have successfully used deductive reasoning to derive new theorems
(statements shown to be true) from axioms (statements assumed to be
true). Thus, whilst the computer is 'only' following its instructions,
it is formulating new ideas. On the concept of "new ideas", Turing
points out

> Who can be certain that "original work"
> that he has done was not simply the growth of the seed
> planted in him by teaching, or the effect of following
> well-known general principles.

So can anyone really claim that a new idea is not derived from existing
knowledge and rules, just as in an inference engine? And how many
people actually come up with a single genuinely new idea in their
lifetime?
Turing puts much faith in future learning machines. Learning machines
do of course exist today (neural networks, for instance). Turing
considered problems such as playing chess as ways to develop new
computing ideas. IBM's Deep Blue is an example of a computer that can
play chess better than any human, though this is of course a very
abstract task. If every task a human can viably perform could be
modelled in the same way, would such a machine play the 'imitation
game' as well as a man? If it could, would it actually be thinking?
This type of question is exactly why Turing designed his test. A man
believes that the machine is thinking. If the machine isn't thinking,
it's coming pretty close.
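To make "learning machine" concrete, here is about the smallest one that exists: a single perceptron that acquires the logical AND function from labelled examples rather than from explicit instructions. The training setup is a standard textbook sketch, not anything from Turing's paper.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Minimal perceptron: adjusts two weights and a bias from
    examples, so its behaviour is learned rather than programmed."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # move weights toward the labelled answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn logical AND from four labelled examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
and_gate = train_perceptron(and_samples)
print([and_gate(x1, x2) for (x1, x2), _ in and_samples])  # [0, 0, 0, 1]
```

Nothing in the code says what AND is; the rule emerges from the examples, which is the property Turing hoped learning machines would eventually scale up to far harder tasks.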

Terry, Mark <>

This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:26 GMT