Re: Turing Test

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Wed Feb 16 2000 - 19:24:27 GMT


http://cogprints.soton.ac.uk/abs/comp/199807017

On Thu, 4 Feb 1999, Shaw, Leo wrote:

> Shaw:
> One question that could be asked about this test is whether it is
> possible for a machine to deceive the interrogator by applying a
> (comprehensive) set of rules. Perhaps it could be argued that,
> over a long period of time, the machine would require the ability
> to 'think' in the same way as the interrogator in order to
> maintain the deception. Surely the outcome will also depend on the
> ability of the interrogator to ask appropriate questions, and on
> his or her preconceptions of how machines behave.

Better to think of the test as an exchange of letters with a lifelong
pen-pal who, unbeknownst to you, is actually a computer. Where's the
deception? The machine has to
have the capacity to do anything a real pen-pal can do (in a letter),
and to do it so well that you would never suspect it was anything
other than another real, live pen-pal.

No deception. Real capacity. The only question is whether "applying a
(comprehensive) set of rules" (algorithms) to the symbols in the
incoming letters would indeed be able to generate outgoing letters
indistinguishable from those of a real pen-pal:

Could it?

And if it did, would that really be thinking?

And if not, why not? And what WOULD be real thinking?
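
To make "applying a (comprehensive) set of rules to the symbols"
concrete, here is a toy sketch (in Python; the rules and wording are
invented purely for illustration, in the spirit of ELIZA-style pattern
matching). It is nothing remotely like a TT-passer; it only shows what
"rules applied to incoming symbols, producing outgoing symbols" means:

    import re

    # A toy "set of rules": each rule maps a pattern in the incoming
    # letter to a template for the outgoing letter.  (Purely
    # illustrative; a real TT-passer would need vastly more than this.)
    RULES = [
        (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"\bmy (\w+)\b", re.IGNORECASE), "Tell me more about your {0}."),
        (re.compile(r"\?$"), "That is a hard question; what do you think?"),
    ]

    def reply(incoming: str) -> str:
        """Apply the rules, in order, to the incoming symbols."""
        for pattern, template in RULES:
            match = pattern.search(incoming)
            if match:
                return template.format(*match.groups())
        return "I see. Please go on."

    print(reply("I feel uneasy about my exams"))

The open question above is whether any such rule set, however
comprehensive, could sustain a lifetime of letters.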

> > TURING:
> > We also wish to allow the possibility that an engineer or team
> > of engineers may construct a machine which works, but whose
> > manner of operation cannot be satisfactorily described by its
> > constructors because they have applied a method which is
> > largely experimental
>
> Shaw:
> This could be important, because it removes the requirement that
> the designer of the system should understand its working. Assuming
> that it is possible to construct a 'thinking' machine and
> establish that it can 'think' (the original problem), the engineer
> would not need to understand the thought process itself. For
> example, if a neural network of sufficient complexity could be
> constructed and trained so as to pass the Turing test, the
> designer would almost certainly be unable to explain its operation
> at a low level.

You're right. If the TT is passed by a machine we design but we don't
know how it did it, it is still a successful TT-passer; we just don't
understand how it did it. It's as if we built an airplane by luck,
without knowing how we did it. It's still a successful plane (and
proves, if anyone doubted it, that man-made devices can fly), but it
hardly counts as an advance in our understanding of aeronautical
engineering, because we know only THAT it can fly, not HOW.

There are some analogies here to algorithms that generate results that
surprise the designer of the algorithm. But that's not quite the same
thing. For if the designer does understand how the algorithm works, he
doesn't have to also know in advance everything it will do under every
possible condition.

So knowing HOW a TT-passer works still does not imply that we can
predict its every move, any more than we can predict a human's every
move (or an airplane's).

Being able to design a neural net that can learn to talk, as a human
does, would not be bad! If we know how the net works and how it
learns, the fact that we don't know the detailed experiential path that
led it to become the successful pen-pal it eventually became does not
really diminish our overall understanding of how it works, does it?

> > TURING:
> > This special property of digital computers, that they can mimic
> > any discrete-state machine, is described by saying that they
> > are universal machines. The existence of machines with this
> > property has the important consequence that, considerations of
> > speed apart, it is unnecessary to design various new machines
> > to do various computing processes. They can all be done with
> > one digital computer, suitably programmed for each case. It
> > will be seen that as a consequence of this all digital
> > computers are in a sense equivalent.
>
> Shaw:
> This seems to be quite a convincing argument in favour of machines
> eventually being able to think. Can't the brain be considered a
> discrete-state machine? Surely a neuron either fires or it doesn't,
> and it is this that determines the effect on the rest of the
> brain.

The brain may be (among other things) a discrete-state machine; but
that's not ALL it is; and there's no guarantee that just a
discrete-state machine can do all the things the brain can do.

There's not even a guarantee that a discrete-state machine, even a
universal Turing Machine, can pass the Turing Test! It's just a
hypothesis.
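
For concreteness, a discrete-state machine in Turing's sense is just a
finite set of states plus a transition table, which any suitably
programmed digital computer can step through. A minimal sketch in
Python (the "neuron" states and inputs are my invented illustration,
not a claim about real neurons):

    from typing import Dict, List, Tuple

    def run(transitions: Dict[Tuple[str, str], str],
            start: str, inputs: List[str]) -> str:
        """Step a discrete-state machine through a sequence of inputs."""
        state = start
        for symbol in inputs:
            state = transitions[(state, symbol)]
        return state

    # A crude two-state "neuron": 'quiet' or 'firing', driven by whether
    # its summed input is 'above' or 'below' threshold.
    neuron = {
        ("quiet", "above"): "firing",  ("quiet", "below"): "quiet",
        ("firing", "above"): "firing", ("firing", "below"): "quiet",
    }

    print(run(neuron, "quiet", ["below", "above", "above", "below"]))  # quiet

Whether the brain is ONLY a table like this, and whether any such table
could pass the TT, are exactly the open questions above.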

> > TURING:
> > The claim that a machine cannot be the subject of its own
> > thought can of course only be answered if it can be shown that
> > the machine has some thought with some subject matter.
> > Nevertheless, "the subject matter of a machine's operations"
> > does seem to mean something, at least to the people who deal
> > with it. If, for instance, the machine was trying to find a
> > solution of the equation x^2 - 40x - 11 = 0 one would be tempted
> > to describe this equation as part of the machine's subject
> > matter at that moment. In this sort of sense a machine
> > undoubtedly can be its own subject matter. It may be used to
> > help in making up its own programmes, or to predict the effect
> > of alterations in its own structure. By observing the results
> > of its own behaviour it can modify its own programmes so as to
> > achieve some purpose more effectively. These are possibilities
> > of the near future, rather than Utopian dreams.
> Shaw:
> Is this what is meant by 'thoughts'? Computers can alter their
> behaviour to improve some measure of performance, but they aren't
> really thinking, they are following rules.

You are right that such self-modification is not (necessarily) the same
thing as thought. But it is not clear that thought could not be based
on just rule-following either, is it?

> Shaw:
> Surely to say that an
> entity is the subject of its own thought implies that it has a
> concept of itself in relation to the rest of the world. Do we
> consider animals to be the subject of their own thoughts when they
> learn to perform tasks with greater aptitude?

Why do thoughts require a self-concept (or any particular concept)?
Until we know more about what it really is, "thinking" or
"intelligence" is just whatever it is that allows us do all the things
we can do when we are thinking. Why couldn't the only thought an animal
has be "ouch"? Why do the thoughts have to be fancy thoughts about
thoughts and about thinking and Turing Tests and self and so on? Isn't
"ouch" enough?

> Shaw:
> Another interesting criticism is that machines can only ever do
> what we tell them, to which the answer is:
>
> > TURING:
> > One could say that a man can "inject" an idea into the machine,
> > and that it will respond to a certain extent and then drop into
> > quiescence, like a piano string struck by a hammer. Another
> > simile would be an atomic pile of less than critical size: an
> > injected idea is to correspond to a neutron entering the pile
> > from without. Each such neutron will cause a certain
> > disturbance which eventually dies away. If, however, the size
> > of the pile is sufficiently increased, the disturbance caused
> > by such an incoming neutron will very likely go on and on
> > increasing until the whole pile is destroyed. Is there a
> > corresponding phenomenon for minds, and is there one for
> > machines? There does seem to be one for the human mind. The
> > majority of them seem to be "subcritical," i.e., to correspond
> > in this analogy to piles of subcritical size. An idea presented
> > to such a mind will on average give rise to less than one idea
> > in reply. A smallish proportion are supercritical. An idea
> > presented to such a mind may give rise to a whole "theory"
> > consisting of secondary, tertiary and more remote ideas.
>
> Shaw:
> These analogies are interesting, because human beings are
> constantly thinking in some way or another without requiring
> explicit provocation. In some cases, thought is clearly
> structured, for example when we are solving a problem, but the
> rest of the time we can decide what to devote our thoughts to,
> subject to some initial stimulus. This can result in our 'state
> of mind' changing, so that, for example, after a period of time
> with no external stimulus, our response to a question might
> change. Perhaps if a machine could be seen to exhibit this kind
> of behavior, it could be considered to be 'thinking'.

But is this about whether the machine is really thinking at all, or
just about whether it happens to be able (or unable) to do certain
KINDS of thinking?

The example of the algorithm (or any physical-causal mechanism) that
goes on to automatically calculate or generate results its
designer did not know or predict should be enough to show that, in
principle at least, there's no reason some algorithm (or some causal
mechanism) couldn't do anything and everything I can do. The fact that
an algorithm or mechanism lies at its basis, generating the performance,
does not imply that the performance itself will appear automatic or
mechanistic.
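
A throwaway illustration of that point (my example, not Turing's or
Shaw's): even a two-branch deterministic rule can behave in ways its
author cannot foresee without simply running it.

    def collatz_length(n: int) -> int:
        """Steps the 3n+1 rule takes to reach 1 from n.

        The rule is completely mechanical, yet the step counts cannot in
        general be foreseen without running it -- knowing HOW it works is
        not the same as knowing everything it will do.
        """
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print([collatz_length(n) for n in (27, 28, 29)])  # [111, 18, 18]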

> > TURING:
>
> > Instead of trying to produce a programme to simulate the adult
> > mind, why not rather try to produce one which simulates the
> > child's? If this were then subjected to an appropriate course
> > of education one would obtain the adult brain. Presumably the
> > child brain is something like a notebook as one buys it from
> > the stationer's. Rather little mechanism, and lots of blank
> > sheets. (Mechanism and writing are from our point of view
> > almost synonymous.) Our hope is that there is so little
> > mechanism in the child brain that something like it can be
> > easily programmed. The amount of work in the education we can
> > assume, as a first approximation, to be much the same as for
> > the human child.
>
> Shaw:
> This paragraph seems to overlook some important points: The
> child's brain is presumably immediately capable of experiencing
> emotion, which must be a strong factor in determining its actions.

Yes, but who says the underlying physical basis of that emotion cannot
be a machine, or even a computer running a particular inborn algorithm?

> This is combined with a vast array of sensory inputs which
> contribute to the child's emotional state, so that 'education'
> could not be encapsulated in a simple dialog with a teacher.

No, but couldn't trial-and-error experience in the world -- a neural net
being shaped by the rewarding and punishing consequences of its
tentative actions -- be based on an algorithm too, with the learning
reconfiguring the system to adapt it to what it has learned?
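
As a minimal sketch of what "shaped by the rewarding and punishing
consequences of its tentative actions" could mean algorithmically (a toy
trial-and-error learner in Python; the action names and numbers are
invented, and it is not offered as a model of a child):

    import random

    reward_prob = {"reach": 0.8, "withdraw": 0.2}  # hidden from the learner
    value = {"reach": 0.0, "withdraw": 0.0}        # the learner's estimates
    alpha, epsilon = 0.1, 0.1                      # learning / exploration rates

    random.seed(0)
    for _ in range(1000):
        # mostly repeat what has worked; occasionally try something else
        if random.random() < epsilon:
            action = random.choice(list(value))
        else:
            action = max(value, key=value.get)
        reward = 1.0 if random.random() < reward_prob[action] else 0.0
        value[action] += alpha * (reward - value[action])  # consequences reshape preference

    print(value)  # 'reach' ends up valued far above 'withdraw'

No verbal instruction is involved; the consequences alone reconfigure
the system.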

But you are right that sensory input (and motor output) are not the
same thing as the symbols-in and symbols-out of the pen-pal Turing
Test (and this will prove important later on). Children do not learn
everything by verbal instruction; the words have to be GROUNDED in
something else first, something other than just more words.

> Furthermore, the child has a strong incentive to learn: survival.
> What motivation would a machine have to learn, wouldn't it need to
> experience pleasure and pain and other emotions as well? Surely
> the ability of a machine to learn to interact with human beings
> would depend on its ability to sympathise with their situation
> through experience of similar situations - wouldn't this require
> emotion?

Well, we don't yet know what is and isn't a machine; we might be
machines too. So it doesn't really help to ask what motivation a
machine may have to learn (if, for example, I am in reality a machine
of a special kind). Maybe the biological tendency to try to survive
(and eat, and reproduce) is all encoded algorithmically, or as some
other causal physical mechanism. Who knows? (And if not, what else
could it be?)

HARNAD, Stevan


