http://cogprints.soton.ac.uk/abs/comp/199807017
On Thu, 10 Feb 2000, Terry, Mark wrote:
> Terry:
> It does seem self-centred of us to define thinking on purely human
> terms. Does an animal or a baby think? Is this less valid than adult
> human thought? A definite point here; is imitation thinking? Is a
> child who copies a parent thinking, or merely behaving? Are these the
> same thing? Thought surely cannot be considered as imitation. If
> thought is imitation, what am I imitating as I write this?
Surely animals think too. And although imitating probably involves
thinking -- maybe is even a form of thinking -- surely it's not the
only form of thinking.
But the imitation game (Turing Test) is not about "imitating" thinking
(or thinking as "imitation"); it is about EXPLAINING what thinking is by
designing a system that can do it.
> > TURING:
> > We only permit digital computers to take part in our game.
Yes, Turing said this. But was he justified in saying it? Is a digital
computer the only possible kind of machine? Does the fact that a digital
computer can IMITATE (just about) any other kind of machine imply that
all machines are really just digital computers? A computer can simulate
a forest fire: Does that mean a forest fire is just a digital computer?
If not, then surely the Turing Test should be open to ANY kind of
machine we design, not just programmable digital computers.
> Terry:
> Turing was well aware that computers in 1950 could not play the game
> well. He was concerned with whether or not any machine we can theorise
> could. Today we may argue that computers are sufficiently powerful,
> but we can always envisage more speed (in terms of computations per
> second), better programs (a set of instructions for the computer to act
> upon), etc. Because of this, if no one ever produces a computer which
> can play the game well, the question still holds. This makes it a very
> hard question to answer "no" to.
It's true that no amount of time spent FAILING to show that a machine
can do something is PROOF that it cannot do it. (Depending on what
the something is, it can sometimes be proved that a machine cannot do
it, but never by simply failing to get a machine to do it.)
Conversely, to prove that a machine CAN do it, all you have to do is
design ONE that can.
But none of this makes such questions unanswerable. We can still keep
trying to design a machine to pass the TT, and perhaps get closer and
closer to it. Or perhaps someone will come up with a proof that we
cannot (I doubt anyone will, because being able to pass as a real pen-pal
is rather too vague and complex and unformalizable a capability to be
formally proved impossible.)
But maybe there is something less than a proof yet more than mere
failure to do so. (Wait for Searle's Chinese Room Argument and the
Symbol Grounding Problem, to come!)
> > TURING:
> > (3) The Mathematical Objection
>
> > The questions that we know the machines must fail on
> > are of this type,
> > "Consider the machine specified as follows. . . . Will
> > this machine ever answer 'Yes' to any question?" The
> > dots are to be replaced by a description of some machine.
> > When the machine described bears a certain
> > comparatively simple relation to the machine which is
> > under interrogation, it can be shown that the answer is
> > either wrong or not forthcoming. This is the
> > mathematical result: it is argued that it proves a
> > disability of machines to which the human intellect is
> > not subject.
>
> Terry:
> There is no denying the validity of this argument. Turing points out
> that humans make mistakes as well, but this seems an unsatisfactory
> response. If a human can give an answer to this problem satisfactorily,
> then any computer will fail the test. I asked a human this question.
> The answer was "not forthcoming". Why should machines be able to
> understand problems a human gets tied up in?
You've misunderstood this. Turing means exactly the opposite here: He
is referring to certain things that computers have been PROVEN to be
unable to do. Hence these are things at which we know a TT candidate
that is a computer must fail, yet we know that real people can do
them. So people can't be computers! [Is that a valid argument?]
This is next week's assignment, as a matter of fact:
J.R. Lucas (1961) Minds, Machines and Goedel. Philosophy 36 112-127.
http://cogprints.soton.ac.uk/abs/phil/199807022
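(For the flavour of the mathematical result Turing is citing, here is a toy
Python sketch of the diagonal construction. The names are mine, not Turing's;
`claims_yes` stands in for ANY proposed machine that answers questions of the
form "Will this machine ever answer 'Yes'?" -- the point is that no
implementation of it can be right about `contrarian`.)

```python
# Toy sketch of the diagonal argument behind "the mathematical result".
# claims_yes() stands in for ANY machine that claims to answer the question:
# "Will machine(machine) ever answer 'Yes'?"

def claims_yes(machine):
    # One arbitrary implementation; the construction below defeats every one.
    return True

def contrarian(machine):
    # Built from the decider itself: do the opposite of whatever it
    # predicts about the machine applied to its own description.
    return "No" if claims_yes(machine) else "Yes"

# Now interrogate the decider about the contrarian:
prediction = claims_yes(contrarian)    # what the decider says will happen
actual = contrarian(contrarian)        # what actually happens
# By construction the two always disagree, whatever claims_yes does:
assert (actual == "Yes") != prediction
```

Swap in any other body for `claims_yes` and the final assertion still holds:
the decider's answer is "either wrong or not forthcoming", just as Turing says.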
> > TURING:
> > (4) The Argument from Consciousness
>
> > This argument is very well expressed in Professor
> > Jefferson's Lister Oration for 1949, from which I quote.
> > "Not until a machine can write a sonnet or compose a
> > concerto because of thoughts and emotions felt, and not
> > by the chance fall of symbols, could we agree that
> > machine equals brain -- that is, not only write it but know
> > that it had written it. No mechanism could feel (and not
> > merely artificially signal, an easy contrivance)
> > pleasure at its successes, grief when its valves fuse,
> > be warmed by flattery, be made miserable by its
> > mistakes, be charmed by sex, be angry or depressed when
> > it cannot get what it wants."
>
> Terry:
> Turing argues that the extreme view of this argument is
>
> > TURING:
> > the only way to know that a man thinks is to be that particular man.
>
> Terry:
> Turing believed that this problem does not have to be debated to answer
> his question. The fact that we don't understand something doesn't mean
> we can't define it and recognise an instance of it by the instance's
> characteristics. I don't know how an aeroplane works, but I can still
> identify one. This point leads to interesting questions such as: is
> thinking a requisite of consciousness?
Or is consciousness a requisite of thinking?
In any case, what Turing meant here was that if we got too sceptical
about whether the computer thinks (and insisted that "I can't know that
without BEING the computer"), then we would have to be just as
sceptical about one another, and that would not only be silly, but it
would lead nowhere.
But is it equally silly to be sceptical about one another's minds and
about a computer's mind? And does it really lead nowhere to be
concerned about how to tell whether or not an artificial system we have
designed really has a mind (thinks)? Or even about whether a natural
creature very different from us (a jellyfish, or a plant) does? If I
don't worry about it with you, because you are so much like me, am I
equally justified in not worrying about it when it's a computer?
Not if the computer's so much like you that I can't tell it apart from
a real person (replies Turing). So the question is: Is being unable to
tell it apart from a real pen-pal close enough?
This question will come up again.
> Terry:
> Turing then treats a similar argument, namely: can a computer enjoy
> strawberries? If notions of pleasure and pain are, as many claim,
> 'all in the mind' then if a machine believes that it enjoys
> strawberries, doesn't it therefore do so?
First we'd have to know whether it really "believes" anything at all
(for "believes" is of a piece with "thinks," "is intelligent," "has a
mind"...)
But with a pen-pal, at least you can ask, in as much detail as you
like.
> > TURING:
> > a machine cannot be the subject of its own thought
>
> We first have to say that a machine has thoughts to decide if it can be
> a subject of them. However, Turing states that a computer executing a
> program is "thinking" about this program. This definition seems to
> halt the entire question of machines thinking. If we accept this, all
> machines which execute some instruction are thinking about it. I cannot
> accept this definition.
You're quite right to make this objection; Leo Shaw made it too.
TT-passing might be evidence of thinking; but this stuff about what
happens to be going on inside the TT-passer is not itself evidence.
But you seem to disagree with your own objection when you go on to say:
> Terry:
> However, programs can be written which modify
> themselves/other programs/the machine. In this sense, then, a computer
> can indeed be the subject of its own thought.
No, it seems to me that, in this sense, a computer can indeed be said
to modify its own programs (symbols, states, data). Whether or not
that's thought is what is on trial here!
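(To make the distinction concrete, here is a toy example of mine, not
Turing's: a program that is "the subject of" its own states only in this
weak sense -- it reads and rewrites its own rule table, so its future
behaviour changes.)

```python
# A program that consults and modifies its own "rules" -- self-modification
# in the weak, uncontroversial sense: changing its own symbols and states.

rules = {"greeting": "Hello"}

def respond():
    reply = rules["greeting"]            # consult its own rule table...
    rules["greeting"] = "Hello again"    # ...then rewrite that table
    return reply

first = respond()    # answers using the original rule
second = respond()   # answers using the rule it rewrote itself
```

Nothing here settles whether the program "thinks" about its rules; it only
shows that modifying one's own states is cheap, and so cannot by itself be
the evidence.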
> > TURING: [Lady Lovelace's Objection]
> > "The Analytical Engine has no pretensions
> > to originate anything. It can do whatever we know how to
> > order it to perform" (her italics).
>
> Terry:
> This argument still rings true to a large extent. However, computers
> have successfully used inductive reasoning to infer new theorems
> (statements we believe to be true) from axioms (statements that are
> true). Thus, whilst the computer is 'only' following its instructions,
> it is formulating new ideas.
Correct.
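(Terry's examples are inductive; the sketch below is a much simpler
deductive forward-chainer, but the point carries over: the machine "only"
follows its instructions, yet ends up asserting statements nobody typed in.
The example facts and rules are mine.)

```python
# A minimal forward-chaining inference engine: apply if-then rules to a set
# of axioms until no new statements can be derived.

axioms = {"socrates is a man"}
rules = [("socrates is a man", "socrates is mortal"),
         ("socrates is mortal", "socrates will die")]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)    # a statement never given explicitly
                changed = True
    return facts

theorems = infer(axioms, rules) - axioms
# theorems now contains statements that were in neither the axioms nor
# any single rule's input: derived, not copied.
```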
> > TURING:
> > Who can be certain that "original work"
> > that he has done was not simply the growth of the seed
> > planted in him by teaching, or the effect of following
> > well-known general principles.
>
> Terry:
> So can anyone really claim that a new idea is not derived from present
> knowledge and rules, just as in an inference engine? Also, how many
> people actually come up with one single genuinely new idea in their
> lives?
You are spot-on again!
> Terry:
> Turing puts much faith in learning machines in the future. Learning
> machines do of course exist (neural networks, for instance). Turing
> considers problems such as playing chess as ways to develop new
> computing ideas. IBM's Deep Blue is an example of a computer that can
> play chess better than any human. This is of course a very abstract
> task. If every task a human can viably perform can be modelled in the
> same way, would this machine play the 'imitation game' as well as a
> man? If it could, would this machine actually be thinking? This type
> of question is exactly why Turing designed his test. A man believes
> that the machine is thinking. If the machine isn't thinking, it's coming
> pretty close.
Being able to play chess is one of the many things we can do (and
indeed can do with a pen-pal). You are asking whether or not a computer
programme will eventually be able to do them all, i.e., be able to pass
the TT.
Yes, that's the question. And whether, if it does, it will really have
a mind (really be thinking).
HARNAD, Stevan
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:26 GMT