http://cogprints.soton.ac.uk/abs/comp/199807017
From: Shaw, Leo <las197@ecs.soton.ac.uk>
Date: Wed 9 Feb 2000
In his paper 'Computing Machinery and Intelligence', Turing
considers the question 'Can machines think?'. In light of the
ambiguity of the words 'machine' and 'think', he proposes an
alternative 'test' to answer the same question:
> TURING:
> The new form of the problem can be described in terms of a game
> which we call the 'imitation game'. It is played with three
> people, a man (A), a woman (B), and an interrogator (C) who may
> be of either sex. The interrogator stays in a room apart from
> the other two. The object of the game for the interrogator is
> to determine which of the other two is the man and which is the
> woman. He knows them by labels X and Y, and at the end of the
> game he says either "X is A and Y is B" or "X is B and Y is A."
...
> It is A's object in the game to try and cause C to make the
> wrong identification.
...
> We now ask the question, "What will happen when a machine takes
> the part of A in this game?" Will the interrogator decide
> wrongly as often when the game is played like this as he does
> when the game is played between a man and a woman? These
> questions replace our original, "Can machines think?"
One question that could be asked about this test is whether it is
possible for a machine to deceive the interrogator by applying a
(comprehensive) set of rules. Perhaps it could be argued that,
over a long period of time, the machine would require the ability
to 'think' in the same way as the interrogator in order to
maintain the deception. Surely the outcome will also depend on the
ability of the interrogator to ask appropriate questions, and on
his or her preconceptions of how machines behave.
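To make the 'set of rules' idea concrete, here is a minimal
sketch in Python of a machine that answers the interrogator
purely by matching each question against a fixed rule table.
Every pattern and reply here is invented for illustration; the
point is that such a table would have to grow implausibly large
to survive a long, probing conversation:

    import re

    # A fixed rule table: each entry maps a question pattern to
    # a canned reply (or to a function that computes one).
    RULES = [
        (r"are you a (man|woman|machine)", "What makes you ask?"),
        (r"(\d+)\s*\+\s*(\d+)",
         lambda m: str(int(m.group(1)) + int(m.group(2)))),
        (r".*", "I'd rather hear more about you."),  # catch-all
    ]

    def respond(question):
        for pattern, reply in RULES:
            m = re.search(pattern, question.lower())
            if m:
                return reply(m) if callable(reply) else reply

    print(respond("Are you a machine?"))  # deflects
    print(respond("What is 21 + 13?"))    # answers, perhaps too well

A skilled interrogator would soon step outside whatever patterns
the designers had anticipated, which is exactly the difficulty
raised above.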
With regard to the definition of the term 'Machine' in the test,
Turing says:
> TURING:
> We also wish to allow the possibility that an engineer or team
> of engineers may construct a machine which works, but whose
> manner of operation cannot be satisfactorily described by its
> constructors because they have applied a method which is
> largely experimental
This could be important, because it removes the requirement that
the designer of the system understand how it works. Assuming
that it is possible to construct a 'thinking' machine and
establish that it can 'think' (the original problem), the engineer
would not need to understand the thought process itself. For
example, if a neural network of sufficient complexity could be
constructed and trained so as to pass the Turing test, the
designer would almost certainly be unable to explain its operation
at a low level.
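That last point can be illustrated with a toy example. The
sketch below (Python with numpy; the architecture, seed and
learning rate are arbitrary choices, and training may need a
different seed or more steps to converge) trains a tiny network
to compute XOR. It ends up working, but its only 'explanation'
is a table of learned weights:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    # A 2-4-1 sigmoid network trained by plain gradient descent.
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        h = sig(X @ W1 + b1)                  # hidden activations
        out = sig(h @ W2 + b2)                # network output
        d_out = (out - y) * out * (1 - out)   # squared-error gradient
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())  # close to [0, 1, 1, 0]
    print(W1)  # the 'explanation': just numbers, not rules

Even at this scale the designer can verify *that* it works far
more easily than they can say *how*; a network complex enough to
pass the test would be opaque in the same way, only more so.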
Later in the paper, Turing asserts that a digital computer can
produce the same effects as any 'discrete state machine', one
that moves in jumps between a finite number of definite states,
with nothing in between:
> TURING:
> This special property of digital computers, that they can mimic
> any discrete-state machine, is described by saying that they
> are universal machines. The existence of machines with this
> property has the important consequence that, considerations of
> speed apart, it is unnecessary to design various new machines
> to do various computing processes. They can all be done with
> one digital computer, suitably programmed for each case. It
> will be seen that as a consequence of this all digital
> computers are in a sense equivalent.
This seems to be quite a convincing argument in favour of
machines eventually being able to think. Can't the brain be
considered a discrete-state machine? Surely a neuron either
fires or it doesn't, and it is this that determines its effect
on the rest of the brain.
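The universality claim itself is easy to make concrete: a
discrete-state machine is completely described by a finite
transition table, so one suitably programmed digital computer
mimics it by simple lookup. A minimal sketch in Python (the
three-state machine below is loosely modelled on the
wheel-and-lever example in Turing's paper, but the exact table
and labels are invented here):

    TRANSITIONS = {  # (state, input) -> next state
        ("q1", "i0"): "q2", ("q1", "i1"): "q1",
        ("q2", "i0"): "q3", ("q2", "i1"): "q2",
        ("q3", "i0"): "q1", ("q3", "i1"): "q3",
    }
    OUTPUT = {"q1": "light off", "q2": "light off",
              "q3": "light on"}

    def run(transitions, output, state, inputs):
        # Simulate any discrete-state machine given as tables.
        for signal in inputs:
            state = transitions[(state, signal)]
            print(signal, "->", state, ":", output[state])
        return state

    run(TRANSITIONS, OUTPUT, "q1", ["i0", "i0", "i1", "i0"])

The same `run` function would serve for any other transition
table, which is the sense in which one machine can stand in for
them all.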
One of the arguments that Turing defends against is the claim
that machines will never be able to be the subject of their own
thoughts:
> TURING:
> The claim that a machine cannot be the subject of its own
> thought can of course only be answered if it can be shown that
> the machine has some thought with some subject matter.
> Nevertheless, "the subject matter of a machine's operations"
> does seem to mean something, at least to the people who deal
> with it. If, for instance, the machine was trying to find a
> solution of the equation x^2 - 40x - 11 = 0 one would be tempted
> to describe this equation as part of the machine's subject
> matter at that moment. In this sort of sense a machine
> undoubtedly can be its own subject matter. It may be used to
> help in making up its own programmes, or to predict the effect
> of alterations in its own structure. By observing the results
> of its own behaviour it can modify its own programmes so as to
> achieve some purpose more effectively. These are possibilities
> of the near future, rather than Utopian dreams.
Is this what is meant by 'thoughts'? Computers can alter their
behaviour to improve some measure of performance, but they aren't
really thinking; they are following rules. Surely to say that an
entity is the subject of its own thought implies that it has a
concept of itself in relation to the rest of the world. Do we
consider animals to be the subject of their own thoughts when they
learn to perform tasks with greater aptitude?
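As an aside, the equation in Turing's example is dispatched by
exactly this kind of rule-following; a few lines of Python (just
the quadratic formula, nothing more) find its roots:

    import math

    a, b, c = 1.0, -40.0, -11.0
    disc = b * b - 4 * a * c               # 1600 + 44 = 1644
    r1 = (-b + math.sqrt(disc)) / (2 * a)
    r2 = (-b - math.sqrt(disc)) / (2 * a)
    print(r1, r2)   # roughly 40.273 and -0.273

That the machine can do this mechanically is precisely why it is
hard to see the equation-solving alone as the machine 'having a
thought' about itself.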
Another interesting criticism is that machines can only ever do
what we tell them, to which Turing's answer is:
> TURING:
> One could say that a man can "inject" an idea into the machine,
> and that it will respond to a certain extent and then drop into
> quiescence, like a piano string struck by a hammer. Another
> simile would be an atomic pile of less than critical size: an
> injected idea is to correspond to a neutron entering the pile
> from without. Each such neutron will cause a certain
> disturbance which eventually dies away. If, however, the size
> of the pile is sufficiently increased, the disturbance caused
> by such an incoming neutron will very likely go on and on
> increasing until the whole pile is destroyed. Is there a
> corresponding phenomenon for minds, and is there one for
> machines? There does seem to be one for the human mind. The
> majority of them seem to be "subcritical," i.e., to correspond
> in this analogy to piles of subcritical size. An idea presented
> to such a mind will on average give rise to less than one idea
> in reply. A smallish proportion are supercritical. An idea
> presented to such a mind may give rise to a whole "theory"
> consisting of secondary, tertiary and more remote ideas.
These analogies are interesting, because human beings are
constantly thinking in some way or another without requiring
explicit provocation. In some cases, thought is clearly
structured, for example when we are solving a problem, but the
rest of the time we can decide what to devote our thoughts to,
subject to some initial stimulus. This can result in our 'state
of mind' changing, so that, for example, after a period of time
with no external stimulus, our response to a question might
change. Perhaps if a machine could be seen to exhibit this kind
of behaviour, it could be considered to be 'thinking'.
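Turing's pile analogy is, in modern terms, a branching process,
and it is easy to simulate. In the sketch below (Python; the
Poisson offspring distribution and all parameters are
assumptions made purely for illustration, not something Turing
specifies) each idea triggers a random number of secondary ideas
with mean m. Cascades with m < 1 fizzle out, while those with
m > 1 can grow without bound:

    import math, random

    def poisson(lam):
        # Knuth's method: multiply uniforms until below exp(-lam).
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p < threshold:
                return k
            k += 1

    def cascade(m, cap=10000):
        # Ideas triggered by one injected idea, capped so that a
        # 'supercritical' explosion doesn't run forever.
        frontier, total = 1, 1
        while frontier and total < cap:
            frontier = sum(poisson(m) for _ in range(frontier))
            total += frontier
        return total

    random.seed(2)
    print([cascade(0.5) for _ in range(5)])  # subcritical: small
    print([cascade(1.5) for _ in range(5)])  # some reach the cap

The sharp qualitative difference between the two regimes is what
gives the subcritical/supercritical distinction its force.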
Towards the end of the paper, Turing considers the possibility of
'educating' a primitive machine:
> TURING:
> In the process of trying to imitate an adult human mind we are
> bound to think a good deal about the process which has brought
> it to the state that it is in. We may notice three components.
> (a) The initial state of the mind, say at birth,
> (b) The education to which it has been subjected,
> (c) Other experience, not to be described as education, to
> which it has been subjected.
> Instead of trying to produce a programme to simulate the adult
> mind, why not rather try to produce one which simulates the
> child's? If this were then subjected to an appropriate course
> of education one would obtain the adult brain. Presumably the
> child brain is something like a notebook as one buys it from
> the stationer's. Rather little mechanism, and lots of blank
> sheets. (Mechanism and writing are from our point of view
> almost synonymous.) Our hope is that there is so little
> mechanism in the child brain that something like it can be
> easily programmed. The amount of work in the education we can
> assume, as a first approximation, to be much the same as for
> the human child.
This paragraph seems to overlook some important points: The
child's brain is presumably immediately capable of experiencing
emotion, which must be a strong factor in determining its actions.
This is combined with a vast array of sensory inputs which
contribute to the child's emotional state, so that 'education'
could not be encapsulated in a simple dialogue with a teacher.
Furthermore, the child has a strong incentive to learn: survival.
What motivation would a machine have to learn? Wouldn't it need to
experience pleasure and pain and other emotions as well? Surely
the ability of a machine to learn to interact with human beings
would depend on its ability to sympathise with their situation
through experience of similar situations - wouldn't this require
emotion?
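Turing does go on to suggest that the education of a child
machine could proceed by rewards and punishments, a crude
stand-in for pleasure and pain. A minimal sketch of that idea
(the situations, responses, teacher and learning rule below are
all invented for illustration): the machine starts with uniform
preferences, the 'blank sheets', and the teacher's feedback
gradually writes on them:

    import random

    SITUATIONS = ["greeting", "parting", "gift"]
    RESPONSES = ["hello", "goodbye", "thank you"]

    # The near-blank notebook: every response equally preferred.
    prefs = {s: {r: 1.0 for r in RESPONSES} for s in SITUATIONS}

    def teacher(situation, response):
        # Hypothetical teacher: rewards the conventional reply.
        good = {"greeting": "hello", "parting": "goodbye",
                "gift": "thank you"}
        return 1.0 if good[situation] == response else -0.5

    random.seed(0)
    for _ in range(3000):
        s = random.choice(SITUATIONS)
        table = prefs[s]
        r = random.choices(list(table), weights=table.values())[0]
        table[r] = max(0.05, table[r] + 0.1 * teacher(s, r))

    for s in SITUATIONS:   # what the 'education' has written
        print(s, "->", max(prefs[s], key=prefs[s].get))

Whether such reward-chasing amounts to motivation in the sense
discussed above is, of course, exactly the question at issue.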
Shaw, Leo <las197@ecs.soton.ac.uk>