From: Bon Mo (firstname.lastname@example.org)
Date: Wed Feb 28 2001 - 11:34:53 GMT
Turing, A. M. (1950) Computing Machinery and Intelligence.
This paper by Turing serves to illustrate the symbolic
pen-pal test. The imitation game consists of a machine,
a human, and an interrogator (who is also human). The
interrogator's task is to distinguish the machine from the
human using a series of well-formed questions. Turing
explains the details, objections and conclusions for the game.
> If the meaning of the words "machine" and "think" are
> to be found by examining how they are commonly used it
> is difficult to escape the conclusion that the meaning
> and the answer to the question, "Can machines think?"
> is to be sought in a statistical survey such as a
> Gallup poll.
Turing was correct to re-define the question using
unambiguous terms for the imitation game. For a start, a
"machine" can refer to any causal system, be it physical
or probabilistic; with these systems we understand their
mechanisms. The word "think", however, describes a
process of a state of mind. The neural and chemical
changes in the brain can be observed and monitored, but
at present we do not understand explicitly how the brain
works. Therefore we cannot treat the brain as a causal
system in this sense, as we do not understand its
mechanisms.
> In order that tones of voice may not help the
> interrogator the answers should be written, or better
> still, typewritten. The ideal arrangement is to have
> a teleprinter communicating between the two rooms.
In order to keep the contest as even as possible, any
physical aspects of the participants are hidden from
the interrogator. The idea in this example is to set up
text interaction only, possibly as modern day e-mail
interaction. It is the replies from the questions, that
the interrogator must make their decision from, not from
the way a contestant looks or talks.
> we should feel there was little point in trying to make
> a "thinking machine" more human by dressing it up in
> such artificial flesh.
Turing carries on with the notion that machines do not
need to look human. If a pen-pal could convince you all your
lifetime that they are human, and at your deathbed, in
rolled your pen-pal. Clearly from all the electronics, you
see it was just a machine, but without ever seeing it, you
would have never known. On this basis you may not still
believe that the machine can think, but it did fool you
for a lifetime that it was human. So for text interaction
at least, the machine became indistinguishable from a human.
Alternatively, some severely cognitively impaired humans
do not seem capable of looking after themselves, let alone
typing out a coherent message. They may not even be believed
to be human by the interrogator. So if these two participants
were compared, who would fall short of being classed as a human?
> The "witnesses" can brag, if they consider it advisable,
> as much as they please about their charms, strength or
> heroism, but the interrogator cannot demand practical
> demonstrations.
Turing must have foreseen that his game was limited
to a symbolic Q&A style of reasoning. This was argued by
Searle with his Chinese-Chinese dictionary example: with
only symbols, a question in Chinese (if you did not already
know Chinese) could not be understood unless you searched
for the definition, but that too would be in Chinese. So
you could regress continually through definitions and never
find an understandable meaning (from your viewpoint, not
from that of someone who understands Chinese). Searle
deemed that symbols needed to be grounded for them to carry
any meaning.
Another limitation of the Q&A format is that anything
enclosed with the question that requires robotic functionality,
such as describing a picture or smelling an object, would
not be possible unless sensorimotor capabilities are provided.
If the questions were a continual barrage of "look at this and
describe for me the.." type questions, a human (as long as
they were not blind and understood the image) could give the
interrogator a detailed description. The machine, without
sensorimotor capabilities, could only guess from the context
of the question what it was shown.
> it will be assumed that the best strategy is to try to
> provide answers that would naturally be given by a man.
Humans often make mistakes: they can be vague in answering
questions, they may take a long time to come up with an
answer, or they may not even understand the question.
An algorithm could be implemented to include errors and give
wrong answers if required. The problem I have with this is
that the way one individual tackles a problem can differ
from another's, even if they were educated in the same way.
It would not be possible to program each different variation
that can be encountered; a non-trivial question may have
anywhere from no ways of answering it to many. Surely the
best strategy, though, is for the machine to answer a
question using a predetermined correct fact. Unfortunately
that fact may be in error, or obsolete with respect to the
question. The machine must be required to learn continually,
acquiring new facts and rules, and to change the relationships
between the data it stores to accommodate the changes. After
all, that is what humans do continuously.
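The idea of deliberately including errors in a machine's
answers could be sketched roughly as follows. This is a
minimal illustration, not anything Turing proposed; the
function and fact store are hypothetical.

```python
import random

def answer_with_errors(question, facts, error_rate=0.1):
    """Look up a stored fact, but occasionally return a wrong
    answer so the responses look more human-like."""
    correct = facts.get(question, "I do not know.")
    if random.random() < error_rate:
        # Deliberately pick a wrong fact from the store.
        wrong = [v for k, v in facts.items() if k != question]
        if wrong:
            return random.choice(wrong)
    return correct

facts = {"capital of France?": "Paris", "2 + 2?": "4"}
print(answer_with_errors("2 + 2?", facts))
```

With error_rate set to zero the machine is always correct,
which is exactly the give-away behaviour Turing wanted to
avoid in the game.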
> We also wish to allow the possibility that an engineer
> or team of engineers may construct a machine which works,
> but whose manner of operation cannot be satisfactorily
> described by its constructors because they have applied a
> method which is largely experimental. Finally, we wish to
> exclude from the machines men born in the usual manner.
Turing's wording is a little vague: to what degree can the
operation be described? If the engineers do not know the
mechanism even though they built it, then they have gained
little, as they do not know how it will work. I also take
it that the exclusion means that the machines are to be
man-made (non-natural) and that we know their mechanisms.
> It may also be said that this identification of machines
> with digital computers, like our criterion for "thinking"
> will only be unsatisfactory if (contrary to my belief),
> it turns out that digital computers are unable to give a
> good showing in the game.
Turing now uses a digital computer as the machine to
satisfy the conditions of the game. Apart from my previous
arguments about grounding and sensorimotor capabilities,
there must also be a way of measuring the quality of the
answers given by the computer and the human. One way is
for the interrogator to agree on an answer, with a certain
degree of error allowed. This is obviously subject to
individual opinion, so it is hard to write down explicitly
the criteria for an answer to 'pass' a question. The
interrogator can then add up all the resultant 'passes',
compare which of A or B has the higher number, and choose
that candidate as the one giving 'the closest best answers
considering the questions'. Of course that candidate could
be either the computer or the human.
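The tallying procedure described above might be sketched
like this; the scoring scheme (one boolean 'pass' per
question) is my assumption, not part of Turing's paper.

```python
def judge(passes_a, passes_b):
    """Count the 'passes' awarded to candidates A and B
    and pick whichever gave more acceptable answers."""
    score_a = sum(passes_a)  # each entry: True if answer passed
    score_b = sum(passes_b)
    if score_a == score_b:
        return "undecided"
    return "A" if score_a > score_b else "B"

# e.g. A passed 4 of 5 questions, B passed 3 of 5
print(judge([True, True, True, True, False],
            [True, False, True, False, True]))  # A
```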
> A digital computer can usually be regarded as consisting of
> three parts: (i) Store. (ii) Executive unit. (iii) Control.
The store is the memory that holds the data and the rules.
The executive unit is the central processor that carries
out the calculations based on the rules. The control
validates that the rules being carried out are believed
to be correct. These rules have to be well formed; they
are coded in the shape of a packet. Turing makes an
attempt to define possible simple instructions that the
machine can take. Needless to say, a modern machine would
be required to parse an input sentence and output a
relevant sentence as the answer.
> The reader must accept it as a fact that digital computers
> can be constructed, and indeed have been constructed,
> according to the principles we have described, and that
> they can in fact mimic the actions of a human computer very
> closely.
The description of the human computer is as follows: the
human computer is supposed to follow fixed rules without
deviating from them; the rules that are supplied can be
altered; and finally the human computer has unlimited
calculation ability. This description has been carried
through to the digital computer. Algorithms can be built
which adapt their rules: learning is possible by weighting
the input and thresholding the value to produce an output,
so even a fixed algorithm can change its rules. The
processing ability of the digital computer depends on the
amount of free memory and the instructions that it has to
carry out. It is assumed that neither is in error, and that
there is enough memory and electrical power to process the
instructions.
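The "weighting and thresholding" mentioned above is
essentially a threshold unit. A minimal sketch, with
weights and threshold chosen purely for illustration:

```python
def threshold_unit(inputs, weights, threshold):
    """Weight each input, sum the results, and fire (1)
    if the total exceeds the threshold, else stay quiet (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Two inputs weighted 0.6 each with threshold 1.0
# behave like a logical AND gate:
print(threshold_unit([1, 1], [0.6, 0.6], 1.0))  # 1
print(threshold_unit([1, 0], [0.6, 0.6], 1.0))  # 0
```

Learning then amounts to adjusting the weights, which is
how a "fixed" algorithm can still change its own rules.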
> Constructing instruction tables is usually described as
> "programming." To "programme a machine to carry out the
> operation A" means to put the appropriate instruction table
> into the machine so that it will do A.
A problem with non-trivial human actions is that we cannot
explicitly define the processes as steps. A simple game
such as draughts has well-defined rules and facts that can
be followed, and any deviation is not allowed. Human thoughts
cannot at present be tapped into so that we can build rules
for each task that we do. We cannot explain our own thoughts,
so it would be naive of us to program partial rules into a
system and hope that the system could carry out similar tasks.
> An interesting variant on the idea of a digital computer
> is a "digital computer with a random element.
Randomness is possible with algorithms. An engineer can
program an algorithm and know all its mechanisms, yet
results can be produced that were unforeseen during its
design.
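A "digital computer with a random element" can be
approximated with a pseudo-random generator. A minimal
sketch (the function and its two branches are hypothetical):

```python
import random

def random_element_machine(seed=None):
    """A fully known program whose branch is chosen by a
    random element, so separate runs can differ even though
    the engineer knows every instruction."""
    rng = random.Random(seed)
    return "path A" if rng.randint(0, 1) == 0 else "path B"

# With a fixed seed the run is reproducible; without one,
# successive runs may take different paths.
print(random_element_machine(seed=42))
```

Strictly speaking a seeded generator is deterministic, which
is why Turing distinguishes this from a truly random element.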
> This special property of digital computers, that they can
> mimic any discrete-state machine, is described by saying
> that they are universal machines.
A Universal Turing Machine is a device that can do any
computation. A real digital computer is an approximation
to a Universal Turing Machine.
> I believe that in about fifty years' time it will be
> possible, to programme computers, with a storage capacity
> of about 10^9, to make them play the imitation game so well
> that an average interrogator will not have more than 70
> per cent chance of making the right identification after
> five minutes of questioning.
Fifty years on, the power of the computer has grown
exponentially. With terabytes of storage capacity, and
1000MHz+ clock speeds capable of carrying out a billion
instructions a second, the average computer is still short
of matching human capabilities in generating "good"
answers. The criteria for the answers have changed: the
Loebner Prize is an award for a computer system capable of
fooling a panel of 3 judges for 45 minutes. So far no
system has come close to this; what is required, though,
is Turing indistinguishability for a lifetime. A system
is no use if it cannot last that length of time.
> We may now consider the ground to have been cleared and
> we are ready to proceed to the debate on our question,
> "Can machines think?" and the variant of it quoted at
> the end of the last section.
The next section covers the contrary views. The theological
objection is a debate on whether God could have given
animals or machines a soul and the ability to think. This
is more religious than scientific, so I will not go into
any depth. The "Heads in the Sand" objection rests on the
belief that man is superior to the rest of creation, and
that the consequences of machines thinking would be too
dreadful. Machines at the moment can learn new data and
facts; what they learn can be in error, and the machine can
execute wrong instructions. At present there are individual
algorithms that can be as intelligent as, if not more so
than, humans at tasks such as chess playing and arithmetic.
What must be done, though, is to create individual modules
for each human task, and to scale the modules up until you
have a full interpretation of human aspects. This is a long
way off, and we have no problems at the moment about
computers becoming more "intelligent" than us. The
mathematical objection shows the limitations of mathematical
logic, which means limitations to the power of discrete-state
machines. The best known of these results is Godel's
theorem. The argument from consciousness is that machines
cannot have feelings. This could be countered by giving
machines sensorimotor capabilities, taking in analog world
data and changing it into grounded symbols.
> Arguments from various disabilities. These arguments
> take the form, "I grant you that you can make machines
> do all the things you have mentioned but you will never
> be able to make one to do X."
Machines are seen to be very limited: when required for
a minutely different purpose they become useless. A couple
of these disabilities are feelings and mistakes. I
believe that, as with humans, all you need is motivation
and less distraction from a task to be able to
accomplish something. Well-developed algorithms and sensors
allow a computer to make informative decisions about its
input. The disabilities would be a challenge to program,
but with enough dedication and time, surely they would
be possible.
> A variant of Lady Lovelace's objection states that a
> machine can "never do anything really new".
Like Turing I strongly disagree: surely even a simple
calculation is "new" if the machine has not performed
it before? Aside from physical changes such as voltage
and magnetic disturbance, an algorithm can be changed by
allowing it to be self-modifying. Any minute error in the
code, unseen at run-time, could produce random results.
You can predict the majority of results, but there may
always be unexpected ones.
> The nervous system is certainly not a discrete-state
> machine. A small error in the information about the
> size of a nervous impulse impinging on a neuron, may
> make a large difference to the size of the outgoing
> impulse. It may be argued that, this being so, one
> cannot expect to be able to mimic the behaviour of
> the nervous system with a discrete-state system.
The nervous system is a physical device; we can model
its mechanism to any arbitrary closeness using a
discrete-state system. Obviously this would only be a
simulation, but it would be as close to the real thing
as anything else. It is true that a single pulse may
make a large difference to the outgoing impulse, but
this is only due to the interconnecting structure of
the brain and the vast number of parallel processes
that occur with each neuron. A discrete-state system
could still simulate these processes. The human brain
itself is not completely without fault: noise is a
common occurrence that may affect the impulses. This
can be simulated with an algorithm incorporating
errors within a discrete-state system.
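Simulating a noisy impulse with a discrete-state system,
as suggested above, might look like the following sketch;
the noise model and quantisation step are illustrative
assumptions.

```python
import random

def noisy_impulse(size, noise_sd=0.05, steps=100):
    """Add Gaussian noise to an impulse size, then quantise
    it into discrete levels, mimicking an unreliable neuron
    inside a discrete-state machine."""
    noisy = size + random.gauss(0.0, noise_sd)
    noisy = min(max(noisy, 0.0), 1.0)    # clamp to [0, 1]
    return round(noisy * steps) / steps  # discrete levels

print(noisy_impulse(0.5, noise_sd=0.0))  # 0.5 when noise-free
```

Making `steps` large shows the "arbitrary closeness" point:
the discrete levels can be as fine as we like.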
> The argument from informality of behaviour. It is
> not possible to produce a set of rules purporting
> to describe what a man should do in every
> conceivable set of circumstances.
This is recognised as a knowledge representation
problem. You can pre-program all the facts and rules
into a system, but as soon as the algorithm is asked to
perform an instruction that it has no rules for, the
system may crash alarmingly. The point is that it is not
possible to program in all the facts and rules; eventually
there will always be something the programmer did not
think of. The system needs some way of gathering explicit
rules that it deems useful, and of incorporating them
into its algorithm.
> Instead of trying to produce a programme to simulate
> the adult mind, why not rather try to produce one
> which simulates the child's? If this were then
> subjected to an appropriate course of education one
> would obtain the adult brain.
The final section is about learning; from Turing's
quote, he believes that subjecting a child's brain to
education would result in an adult brain. The problem
with simulating a child's brain is that the child would
be able to tell you less about how their brain works
than an adult could.
> We may notice three components. (a) The initial state
> of the mind, say at birth, (b) The education to which
> it has been subjected, (c) Other experience, not to
> be described as education, to which it has been subjected.
Needless to say, at birth you can only tell the structure
and chemical processes of a baby; you would not know what
thoughts, if any, it is having, and its language would not
be sufficient to tell you. The baby is subject to
hereditary material, mutations and natural selection
factors, giving countless combinations, which means that
each baby's brain is unique and may not carry out the same
functionality. Education and experiences are just as
varied, and even if yours were the same as someone else's
you might still not carry out a task the same way.
> We normally associate punishments and rewards with the
> teaching process. Some simple child machines can be
> constructed or programmed on this sort of principle.
This is part of the credit/blame assignment problem: if the
correct result is given then credit the inputs leading to
the result, and if an incorrect result occurs then blame
the inputs. The problem is how to assign the credit and
blame. This process requires a teacher that tells the child
machine when it gives a correct or incorrect response.
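A punishment/reward rule of this sort can be sketched as a
simple weight update; the learning rate and the encoding of
inputs as 0/1 are my assumptions for illustration.

```python
def teach(weights, inputs, reward, rate=0.1):
    """Credit (strengthen) the active inputs after a correct
    response; blame (weaken) them after an incorrect one."""
    sign = 1 if reward else -1
    return [w + sign * rate * x for w, x in zip(weights, inputs)]

w = [0.5, 0.5]
w = teach(w, [1, 0], reward=True)   # credit the first input
w = teach(w, [0, 1], reward=False)  # blame the second input
print(w)
```

The teacher here is whoever supplies the `reward` signal,
which is exactly the role Turing assigns to the educator of
the child machine.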
This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:17 BST