Re: Turing: Computing Machinery and Intelligence

From: Bell Simon
Date: Thu May 24 2001 - 17:33:14 BST

Turing, A. M. (1950) Computing Machinery and Intelligence.
Mind 49:433-460.

>This paper is Alan Turing's attempt at trying to solve the question of
>whether computers are able to think for themselves. If it could be proved
>that this was possible, then it would probably have been easy to deduce
>therefore that they were also intelligent. At the time, and until now, the
>true essence of what is captured by intelligence is still unidentifiable.
>It is easier to compare the relative intelligence of two or more
>individuals than it is to specify what is intelligent and why it is or can
>be labelled intelligent. In order to commence his quest for the answer to
>the proposed question, Mr Turing first shows that the question in itself
>was much too ambiguous and as a result, probably wouldn't have a chance to
>be answered fairly.

>>If the meaning of the words "machine" and "think" are to be found by
>>examining how they are commonly used it is difficult to escape the
>>conclusion that the meaning and the answer to the question, "Can machines
>>think?" is to be sought in a statistical survey such as a Gallup poll.

What is meant by something thinking by itself? Is it possessing intentionality,
as Searle suggested in his paper 'Minds, brains, and programs', and if so, is
this an answer or just a redefinition of the problem?
If the true essence of intelligence is unidentifiable, is this because
our own sense of self is false? Perhaps we think so highly of our own
consciousness that we do not consider a mechanical model strong enough
to give rise to an explanation, for this would introduce either
determinism or nondeterminism, both of which are contrary to our sense
of 'free will'.
If we are to compare relative intelligence then a domain is required over
which to do it. Due to the other-minds barrier, behaviourism is the only
one available, but is it adequate? Certainly if two entities behave the same
then we have no grounds to state that one is intelligent and the other is
not. However, if they behave differently, do we have any grounds to
state nonintelligence? Only if we define intelligence in terms of the
behaviour that a human exhibits under scrutiny.

>Once it has been established that the question is inappropriate, Alan
>Turing proceeds to restructure the question into a form that he feels is
>more defined and as a result easier to answer. Turing's revised question
>is described in the form of a game played by three human participants.

>>The new form of the problem can be described in terms of a game which we
>>call the 'imitation game'. It is played with three people, a man (A), a
>>woman (B), and an interrogator (C) who may be of either sex. The
>>interrogator stays in a room apart from the other two. The object of the
>>game for the interrogator is to determine which of the other two is the
>>man and which is the woman. He knows them by labels X and Y, and at the
>>end of the game he says either "X is A and Y is B" or "X is B and Y is A".

If we can only define intelligence in terms of behaviourism, then we
are confined to conducting negative experiments: not to
prove intelligence but to disprove it.

>To avoid the problem of physical, vocal and other obvious differences
>between the sexes, the candidates' communication with the interrogator
>is typewritten. Turing's game requires the male candidate to mislead the
>interrogator's identification while the female candidate's job is to aid
>the interrogator by telling the truth. This is where the question of the
>thinking computer comes in because Turing then considers what will happen
>if a computer replaced the candidate A in the game.

>>We now ask the question, "What will happen when a machine takes the part
>>of A in this game?" Will the interrogator decide wrongly as often when the
>>game is played like this as he does when the game is played between a man
>>and a woman?

>Turing has described this as a game, and that implies that the object is
>to win or avoid losing. The interrogator wins if the man and woman
>candidates can be correctly identified after interrogation; otherwise,
>he loses. Turing's description of the game could leave readers of the
>opinion that the objective is to try and fool someone. Moreover, Turing
>refers to the frequency of correct or wrong identifications. Does the
>frequency of correct identifications actually specify intelligence?
>The original question is to find out if computers are intelligent, not to
>find out how much more or less intelligent they are than man.
>So when a computer replaces the male candidate, Turing's replacement
>question is how much more often the interrogator will make incorrect
>classifications. I don't think that this is the question that Turing meant
>to ask. Is it relevant? What if the male candidate was replaced by
>another male candidate instead of a computer? The interrogator will
>surely not arrive at the same number of correct and wrong identifications
>in both cases. What does this mean? Does it mean that one man is
>intelligent and the other isn't? Or is one more intelligent than the
>other? The question that maybe should have been asked instead is whether
>the interrogator would notice that there had been a substitution.
>Alan Turing then goes on to expatiate on the precise definition of
>machines and in so doing, excludes all non-digital computers from the
>game.
The description of the Turing Test as a game was unfortunate, as this
implies just two outcomes are possible. It is not that I am implying
that there are different levels of intelligence within a range, but
that matching a complex behaviour to an abstract notion such as
intelligence is not going to result in a definite category unless it
were rigorously defined, which it certainly is not.

>>Following this suggestion we only permit digital computers to take part
>>in our game.

>I do not disagree with him on this viewpoint but I'm not sure I understand
>his manner of reasoning for the exclusion. He declares that a human being
>created entirely from a single cell would probably not be categorised as a
>thinking machine, and I am inclined to agree because even though the new
>human has not been born in the usual manner, he or she will more than
>likely be as human as one can be. I however do not understand what that has
>to do with insisting that all the engineers be of the same sex.

It serves no purpose to permit only digital computers to take part in the
test. It is true that as digital computers are Turing-complete they can
compute anything that is calculable, but if an analogue machine were to pass
the Turing Test its intelligence would not be denied on account of it being
analogue and not digital.

>>It is difficult to frame the definitions so as to satisfy these three
>>conditions. One might for instance insist that the team of engineers
>>should be all of one sex, but this would not really be satisfactory, for
>>it is probably possible to rear a complete individual from a single cell
>>of the skin (say) of a man.

>Turing, having allowed only digital computers to be included in this
>experiment, then goes on to expatiate and define a digital computer. His
>definition makes a comparison to a human computer.

>>The idea behind digital computers may be explained by saying that these
>>machines are intended to carry out any operations which could be done by a
>>human computer. The human computer is supposed to be following fixed
>>rules; he has no authority to deviate from them in any detail.

>According to Mr Menabrea's paper on the Analytical Engine, Charles Babbage
>identified two distinct sides to the solution of a problem when performed
>by humans: a mathematical and an intellectual one. It was the mathematical
>aspect that Mr Babbage tried to capture in the description of his machine.
>Turing rightly identifies that such a definition of the digital computer as
>intending to carry out any operation could be potentially hazardous. But he
>claims maybe a bit hastily that the digital computer mimics the actions of
>a human computer very closely. It is probably more appropriate to say that
>they perform calculations, specifically, very similarly to the human
>computer.
>>I believe that in about fifty years' time it will be possible, to
>>programme computers, with a storage capacity of about 10^9, to make them
>>play the imitation game so well that an average interrogator will not have
>>more than 70 per cent chance of making the right identification after five
>>minutes of questioning.

>The above statement alone is enough to see why there has been so much room
>for argument on Turing's theory. He uses words like 'play' and 'imitation
>game'. This is saying that in his future, he saw a computer being developed
>that could deceive an interrogator a proportion of the time that it was
>tested. If a machine can think, it can think. We never say an intelligent
>person is intelligent half the time so why does Turing try to quantify the
>thinking of a machine? Why does he specify that the game is to be played
>with an average interrogator? If the interrogator is being fooled so many
>times in the test period, is it testing his intelligence as well, and
>does his intelligence drop as his identifications become less correct? Why
>did Turing specify a time limit on the game? Surely this is not an endurance
>game. The aim is to find out if machines can think, not to see how long they
>can think for.
>Turing next attempts to consider and defend himself against all possible
>opinions that oppose his. In doing so, he commences with the theological
>objection that suggests that God has only bestowed the ability to think
>through an immortal soul on human beings. This leads to the conclusion
>that animals and machines cannot think.

The fact that Turing attempts to quantify the values that he believes would
indicate the presence of intelligence detracts nothing from the qualities of
the test itself. Turing believed that five minutes were sufficient to
ascertain whether the machine could perform as well as a human within the
domain of the test. To take the test for a longer period would, supposedly in
Turing's view, not alter the results. Turing is not attempting to "quantify
the thinking of a machine" but instead making a sensible suggestion regarding
the length of test required to obtain a result.
The wording of "average interrogator" is not making a judgement of the
interrogator's intelligence but instead just means that if the test were to
be performed with more than one interrogator, the average score should be
taken to counter the effects of over-zealous or over-satisfied people.

>>It appears to me that the argument quoted above implies a serious
>>restriction of the omnipotence of the Almighty. It is admitted that there
>>are certain things that He cannot do such as making one equal to two, but
>>should we not believe that He has freedom to confer a soul on an elephant
>>if He sees fit?

>Turing has a point. Given that God is as powerful as he is meant to be,
>surely he can give a soul to anything he deems fit, and it is impossible
>to know who has a soul and who doesn't, as displayed by the other-minds
>problem. It is just as impossible to know that another person has a soul.
>The only reasoning to suggest this is because we are all similar and of the
>same species. Still siding with Turing against the theologians, if God
>only gave humans an immortal soul, how can animals sense danger or be
>taught things by learning what their owners expect of them? If the Bible is
>taken as the Gospel truth, why did Noah have to save the animals as well as
>his family during the flood?
>Taking the theological point of view for a minute, it does say that human
>beings were created in God's own image and likeness and no such reference
>to any other creature was made so it can probably be assumed therefore that
>we are the only ones with souls.

I think that tackling a theological objection adds nothing to the strength
of Turing's argument, and only leaves it open to attack. Perhaps time
would be better spent exploring the possibilities of the physical world,
and not theorising about the supernatural. The ambiguities of religious
scripture and opinion are not a sound basis upon which to develop an
argument.
>>There are a number of results of mathematical logic which can be used to
>>show that there are limitations to the powers of discrete-state machines.
>>The short answer to this argument is that although it is established that
>>there are limitations to the powers of any particular machine, it has only
>>been stated, without any sort of proof, that no such limitations apply to
>>the human intellect.

>Once again Turing's point is valid. It is true that there are these bits
>of logic that highlight limitations to computation, but it is also true
>that the human intellect has not been able to show that it is not
>susceptible to the same limitations. As Turing says, there are many
>questions that human beings get asked that they cannot answer or get wrong.
>The type of questions that these machines are supposed to fail on are
>for instance those that require decisive answers about topics that have
>no definitive answer.
>By the same token, if a human was asked whether a similar human would ever
>answer 'Yes' to any question, there would definitely be a long pondering
>time. Back to the other-minds problem: the individual being asked the
>question is not the individual in the question, even though they are
>similar, and so it is not really possible to know how the latter would
>answer the question in order for the former to answer correctly. Confused?
>That's what I thought. This is the limitation being questioned, and it can
>be seen that it affects the human as well as the machine.
>Turing's next defence is against the argument from those who believe in
>consciousness as a prerequisite for thought.

Advocates of results such as Gödel's believe that the limit upon computation
is not present in humans, and so the two cannot be linked. This, as Turing
states, has not been shown to be true. It seriously lacks conviction and
credibility given the magnitude of research put in to show otherwise.

>>This argument is very well expressed in Professor Jefferson's Lister
>>Oration for 1949, from which I quote. "Not until a machine can write a
>>sonnet or compose a concerto because of thoughts and emotions felt, and
>>not by the chance fall of symbols, could we agree that machine equals
>>brain - that is, not only write it but know that it had written it."

>Here as before, one can only prove that the machine knows it has written
>the sonnet by actually becoming the machine in order to get round the
>other-minds problem. Searle overcame this barrier in his Chinese room
>argument, which is the pen-pal version of the Turing test. He became the
>machine communicating in Chinese with the interrogator by implementing
>algorithms alone. He showed by so doing that he could communicate endlessly
>in Chinese without actually understanding a word of the language. This goes
>a long way to show that even if a machine can write a sonnet, it can do it
>without necessarily knowing what it had done. This suggests that the
>consciousness people have a point.
>Turing's next opposition is from the various things that it is suggested
>computers cannot do that humans can.

Searle did not overcome the other-minds barrier. He could make no comment
upon whether the algorithms that he manipulated displayed intelligence. The
fact that it was Searle who performed the algorithmic manipulation is
irrelevant, and his lack of understanding Chinese demonstrates the hardware
abstraction that is fundamental to artificial intelligence. I would not
claim that the cells in my brain possessed intelligence at a chemical level.
Intelligence is a behaviour that something exhibits, not a physical
quantity. Searle made the mistake of comparing a behaviour that is exhibited
by a collection of objects with the principle upon which they individually
operate.
>>These arguments take the form, "I grant you that you can make machines do
>>all the things you have mentioned but you will never be able to make one
>>to do X." Numerous features X are suggested in this connexion. I offer a
>>selection:
>>Be kind, resourceful, beautiful, friendly, have initiative, have a sense
>>of humour, tell right from wrong, make mistakes, fall in love, enjoy
>>strawberries and cream, make some one fall in love with it, learn from
>>experience, use words properly, be the subject of its own thought, have as
>>much diversity of behaviour as a man, do something really new.

>All of these disabilities are similar to the arguments concerning emotion
>and consciousness. Turing identifies this and provides some argument in his
>favour to combat these. For my own addition to his cause: what does beauty
>have to do with intellect? There are lots of examples in the world of
>beautiful people who are not renowned for their intelligence, and by the
>same token, there are intelligent people who would not be categorised as
>beautiful. By the way, what happened to beauty being in the eye of the
>beholder?
>On the matter of learning from experience, there are lots of machines that
>have self-modifying code. There are neural networks that show learning by
>following some training rule. An example is the learning of simple logic
>functions by the single-layer perceptron using the perceptron training rule.
>Turing agrees when he refers to the machine being the subject of its own
>thought.
When Turing uses the word beautiful I think that he is referring to being
kind, generous, etc., and not to a physical attribute.
I do not see why there is an issue with emotional states and computers,
for they are only behaviours that are exhibited. It is true that
emotional states in humans are closely intertwined with chemicals that alter
neurological interactions, and so implementation, even through a neural
network that resembled the structure of a brain, would be difficult. This
stipulation however has no relevance to computation being the basis of
intelligence, and so to whether machines can display intelligence.

>>In this sort of sense a machine undoubtedly can be its own subject matter.
>>It may be used to help in making up its own programmes, or to predict the
>>effect of alterations in its own structure. By observing the results of
>>its own behaviour it can modify its own programmes so as to achieve some
>>purpose more effectively.

>With regards to doing something new, every time a computer does something
>for the first time, it is doing something new. Also, with new software
>being implemented on it, surely it is doing something new. If the argument
>is that humans do not require new software to do something new, then what
>is it when they are taught things? Surely it is in effect new data or
>rules being presented to them. Lady Lovelace's objection is similar to this
>and is also defended against by Turing. It has already been decided that
>computers can learn from experiences; surely if this is the case, the next
>time a similar event is experienced the outcomes of the old experience can
>be applied to arrive at a new outcome. In the case of the neural network,
>the net starts off unknowledgeable and at the end of its training it has
>learnt how to perform the AND function and can be tested to prove it.

>>It is true that a discrete-state machine must be different from a
>>continuous machine. But if we adhere to the conditions of the imitation
>>game, the interrogator will not be able to take any advantage of this.
>>It would not be possible for a digital computer to predict exactly what
>>answers the differential analyser would give to a problem, but it would be
>>quite capable of giving the right sort of answer. For instance, if asked
>>to give the value of π (actually about 3.1416) it would be reasonable to
>>choose at random between the values 3.12, 3.13, 3.14, 3.15, 3.16 with the
>>probabilities of 0.05, 0.15, 0.55, 0.19, 0.06 (say). Under these
>>circumstances it would be very difficult for the interrogator to
>>distinguish the differential analyser from the digital computer.

>The entire basis for the imitation game is that it is carried out with a
>digital or discrete-state machine. It is quite unsurprising therefore that
>the next argument to be contended is one that highlights that the computer
>being modelled (the human computer) is not a discrete-state machine. As
>Turing states, this fact is irrelevant if the game is played as he
>described. He further shows that even though a continuous machine
>cannot be mimicked by a digital computer, they can still both yield the
>same sort of answer and therefore there is no point trying to differentiate
>between the two.
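Turing's suggested trick, answering with a randomly chosen approximation
weighted by the probabilities he gives, can be sketched in a few lines of
Python (a hypothetical illustration; the function name is mine, the values
and probabilities are Turing's):

```python
import random

# Turing's example: answers a digital computer might give for the value of
# pi when imitating a differential analyser, with his suggested weights.
values = [3.12, 3.13, 3.14, 3.15, 3.16]
probabilities = [0.05, 0.15, 0.55, 0.19, 0.06]

def analyser_answer(rng=random):
    """Pick one approximation at random, weighted as Turing suggests."""
    return rng.choices(values, weights=probabilities, k=1)[0]
```

Since the answer clusters around 3.14 but occasionally drifts, the
interrogator sees exactly the imprecision a continuous machine would show,
which is Turing's point about the two being indistinguishable in the game.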

>>A more specific argument based on ESP might run as follows: "Let us play
>>the imitation game, using as witnesses a man who is good as a telepathic
>>receiver, and a digital computer. The interrogator can ask such questions
>>as 'What suit does the card in my right hand belong to?' The man by
>>telepathy or clairvoyance gives the right answer 130 times out of 400
>>cards. The machine can only guess at random, and perhaps gets 104 right,
>>so the interrogator makes the right identification." There is an
>>interesting possibility which opens here. Suppose the digital computer
>>contains a random number generator.
>>On the other hand, he might be able to guess right without any
>>questioning, by clairvoyance. With ESP anything may happen.

>Turing suggests that the computer's random number generator will be subject
>to the psychokinetic powers of the interrogator. Does the computer need
>to have a random number generator? If it just makes a random guess and is
>susceptible to the psychokinetic powers, surely it could still make more
>correct guesses. In considering the flip side, is Turing referring to the
>interrogator or the male candidate making a correct guess? If it turns out
>that computers are susceptible to Extrasensory Perception, then in
>agreement with Turing, it is necessary to modify the setup for the
>imitation game.
>Finally, Turing concludes his paper by concerning himself with the topic of
>learning machines. He is trying to produce his own arguments to support his
>theories after going so far as to pick out the holes in his opposition. He
>starts this by focusing on Lady Lovelace's objection.

>>Another simile would be an atomic pile of less than critical size: an
>>injected idea is to correspond to a neutron entering the pile from
>>without. Each such neutron will cause a certain disturbance which
>>eventually dies away. If, however, the size of the pile is sufficiently
>>increased, the disturbance caused by such an incoming neutron will very
>>likely go on and on increasing until the whole pile is destroyed. Is there
>>a corresponding phenomenon for minds, and is there one for machines?

>I agree that given an atomic pile, a neutron's interference would cause the
>described disturbance until it dies down. How does Turing then go on to say
>that if the pile was large enough, such a disturbance would get greater? Is
>that possible? Take for example a pebble dropped into a puddle: the effects
>are obvious until, as expected, the ripples die down. If that same pebble
>was dropped in the ocean, it doesn't have the huge impact that Turing's
>statement seems to be conveying. If anything, I would say that the effects
>will decrease more quickly due to the considerable increase in the number
>of atoms to provide damping. This analogy therefore does not seem to me to
>support Turing's following argument with reference to the subcritical and
>supercritical mind.

Turing's analogy with his atomic pile is true enough in his description
of what occurs when neutrons are fired into it, as it refers to the
critical mass required for an atomic explosion. Turing is showing an
example of where the magnitude of elements involved directly affects the
behaviour that is exhibited. This is an enticing prospect, as animals with
larger brains display more signs of intelligence and reasoning, and more
complex neural networks can classify a greater number of different patterns.
This also demonstrates the fundamental problem of using analogies in
discussions and deriving conclusions from their behaviour. The atomic
pile and the pebble in the ocean seem equivalent, but abstract away
totally different aspects of the issue.

>>Adhering to this analogy we ask, "Can a machine be made to be
>>supercritical?"

>Going along with his sub/supercritical mind theory, it is obviously an
>appropriate question to ask whether machines can be made to be
>supercritical, but surely it would be better to see if they can be
>subcritical first. I accept that if they can be made to be supercritical
>then the question of subcriticality becomes irrelevant, but by the same
>token, if they cannot be made to be subcritical first, then how are they
>going to be made supercritical?

I believe that Turing meant that machines already are subcritical; they do
not exhibit the behaviour that a more complex machine may: intelligence.
Turing was proposing to classify machines into two categories, not with
a third 'other' state.

>>Our problem then is to find out how to programme these machines to play
>>the game.

>Once again, Turing has used his description of the test as a game. I think
>that Turing had the right idea in many ways but if controversy and
>misinterpretation have arisen in the minds of the population as a result of
>his paper, then it is mainly down to the manner in which Turing set about
>explaining things.
>Turing's final thoughts were to entertain the idea that following the human
>development, we start as children and through learning from experiences, we
>develop into adults. The child mind is assumed with reason to be slightly
>less complicated than the adult one. Intelligence, whatever that is, can
>nevertheless be seen in children as they adapt and reform ideas through
>their experiences until they evolve into their adult forms where they carry
>on the learning process. Taking this notion into consideration, Turing's
>idea is that instead of trying to model adult intelligence, we model the
>child brain and its processes and then, given experiences and guidance, it
>might one day become the adult brain that would be considered artificially

The notion of allowing a machine to learn on its own, and not attempting to
directly place knowledge into algorithms, is in my view a good one. If we
were to try to build knowledge into a machine, then we would be dictating
the structure which it would take. This could easily be a misconception and
so could prevent success. The fact that we cannot acquire knowledge by
studying neural activity indicates that we would be unable to introduce it
manually with any success either.

This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:31 BST