Re: Turing: Computing Machinery and Intelligence

From: Patel Krupali (kp898@ecs.soton.ac.uk)
Date: Wed May 30 2001 - 18:41:33 BST


On Mo's commentary on Turing, A. M. (1950) Computing Machinery and
Intelligence. Mind 59:433-460.
http://cogprints.soton.ac.uk/documents/disk0/00/00/04/99/index.html

> Mo:
> Turing was correct to re-define the question in unambiguous terms
> for the imitation game. For a start, a "machine" can refer to any
> causal system, be it physical or probabilistic, and with such
> systems we understand the mechanism. The word "think", by contrast,
> is defined as a process of a state of mind. The neurochemical
> changes in the brain can be observed and monitored, but at present
> we do not understand explicitly how the brain works. Therefore we
> cannot treat the brain as a causal system, since we do not
> understand its mechanism.

Patel:
The definition of "think" above describes a process of a state of
mind. As stated above, we do not know explicitly how the mind works;
therefore we cannot treat the brain as a causal system, because we do
not understand its mechanism. This is a fair point, but it is
important to stress that Turing's Test was proposed precisely as a
means of turning the question of machine thought or intelligence into
an empirical one: if we see it, we believe it. The idea I get from
what you say above is that if we understand a machine and its
mechanisms, then we have a machine that thinks. According to the
Turing Test, if the interrogator cannot discriminate between the
human output and the machine output at a statistically significant
level, then the test is passed.
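
To make that criterion concrete, here is a minimal sketch (my own
illustration, not anything Turing specifies) of how one might score a
run of the imitation game in Python: record the interrogator's
verdicts and ask whether the rate of correct identifications differs
from chance.

    from math import comb

    def discrimination_p_value(correct: int, trials: int) -> float:
        # One-sided binomial p-value: the probability of at least
        # `correct` right identifications in `trials` rounds if the
        # interrogator were merely guessing (chance = 0.5).
        return sum(comb(trials, k)
                   for k in range(correct, trials + 1)) / 2 ** trials

    # Hypothetical run: 50 rounds, 27 correct identifications.
    p = discrimination_p_value(27, 50)
    # On this reading the machine passes when discrimination is NOT
    # statistically significant, i.e. the interrogator does no better
    # than chance.
    print("machine passes" if p > 0.05 else "machine detected",
          "(p = %.3f)" % p)

The 0.05 threshold and the number of rounds are, of course, arbitrary
choices made only for illustration.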

This suggests to me that there is no difference between a thinking
human being and a sufficiently sophisticated robot, and that it would
be right to say the robot is as capable of thinking as the human.

This is because the outputs are of the sort which, when produced by
human beings, would be said to involve thinking. On your definition
above, that would mean the robot is in a state of mind. Is it
possible, then, for the robot to have a mind?

> Mo:
> Turing carries on with the notion that machines do not need to look
> human. Suppose a pen-pal convinced you for your whole lifetime that
> they were human, and at your deathbed in rolled your pen-pal:
> clearly, from all the electronics, you could see it was just a
> machine, but without ever seeing it you would never have known. On
> this basis you might still not believe that the machine can think,
> but it did fool you for a lifetime into taking it for human. So for
> text interaction at least, the machine was indistinguishable from a
> human. Conversely, some mentally retarded humans do not seem capable
> of looking after themselves, let alone of typing out a coherent
> message; they might not even be believed to be human by the
> interrogator. So if these two participants were compared, who would
> fall short of being classed as human?

Patel:

I agree with this up to a point. I would take a human to be a living,
breathing, respiring and reproducing being. If your pen-pal did all
of these, would you think it more human than the retarded person?

> Mo:
> Turing must have foreseen that his game was limited to a symbolic
> Q&A style of reasoning. Searle argued as much with his
> Chinese-Chinese dictionary example: with only symbols, a question
> in Chinese (if you did not already know Chinese) could not be
> understood unless you looked up the definitions, but those too
> would be in Chinese. So you could regress endlessly through
> definitions and never find an understandable meaning (from your
> viewpoint, not from that of someone who understands Chinese).
> Searle deemed that symbols need to be grounded for them to carry
> any meaning. Another limitation of the Q&A format is that any
> question requiring robotic functionality, such as describing a
> picture or smelling an object, could not be answered unless
> sensorimotor capabilities were provided. If the questions were a
> continual barrage of "look at this and describe for me the..."
> type questions, a human (as long as they were not blind and
> understood the image) could give the interrogator a detailed
> description. The machine, without sensorimotor capabilities, could
> only guess from the context of the question what it was shown.

Patel:
I disagree with the idea that, in Searle's Chinese dictionary
argument, a person could only understand a definition if it were in
Chinese too. It is important to stress that it is not the speed of
the person we are looking at: what matters is the procedure the
Chinese Room follows, not how fast it operates. Finding a definition
at a quicker pace does not qualify the person as intelligent or
demonstrate any comprehension on his or her part. In Searle's
scenario, the person who knows no Chinese is given a system of
instructions (in his native English) which enables him to simulate
the syntactic, semantic and sequential properties of Chinese
conversation without actually needing any understanding of Chinese.
The fact that the definitions are found in Chinese, and that the
person's output indistinguishably simulates that of someone who does
speak Chinese, does not itself demonstrate that the simulator
understands Chinese. The only element of understanding present is of
the English instructions, not of Chinese.
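
To make the point concrete, here is a minimal sketch (my own
illustration; the rule table and symbol names are invented
placeholders, not real Chinese) of a pure rule-follower. Every reply
is produced by rote lookup, and no step requires knowing what any
symbol means.

    # A rote rulebook: "when you see these squiggles, write back that
    # squoggle". The entries are invented placeholders for
    # illustration.
    RULES = {
        "SQUIGGLE-A SQUIGGLE-B": "SQUOGGLE-C",
        "SQUIGGLE-C SQUIGGLE-D": "SQUOGGLE-E",
    }

    def chinese_room(input_symbols: str) -> str:
        # Return whatever the rulebook dictates; nothing here involves
        # knowing what any symbol means.
        return RULES.get(input_symbols, "SQUOGGLE-DEFAULT")

    # The output may be indistinguishable from a competent speaker's,
    # yet nothing in the procedure grounds the symbols in meaning.
    print(chinese_room("SQUIGGLE-A SQUIGGLE-B"))

However long the rule table grows, the procedure stays the same:
symbol in, symbol out, with meaning nowhere in the loop.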


