From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Wed Mar 21 2001 - 15:18:22 GMT
On Sun, 4 Mar 2001, Godfrey Steve wrote:
> Turing: Computing Machinery and Intelligence
> http://cogprints.soton.ac.uk/documents/disk0/00/00/04/99/index.html
> Godfrey:
> 'machines' and 'think' are both poorly defined and, as I have
> discovered after much debate, very hard to define.
Machines are easy to define: They are causal systems. Every physical
system obeying the causal laws of physics (planets, molecules, animals,
cars) is a machine.
But what is thinking? We can't define it until we know what it is and
how it works, but we can certainly point to it. Each of us knows
exactly what he means by "I am thinking". THAT's thinking. And it's
whether or not a machine we have built does THAT that we (and Turing)
are discussing here.
> Godfrey:
> A machine can be
> classified as thinking by passing this test, fooling the interrogator
> into thinking it is human.
No, a machine that passes the Turing Test can only be "classified" as
having passed the Turing Test. That does not mean it was thinking (see
above).
And the real point is not about "fooling" the interrogator into thinking
it was HUMAN but about the interrogator being unable to tell it apart
from someone who has a mind. That does not necessarily mean the
interrogator has been FOOLED. He simply can't tell them apart (Turing
Indistinguishability). And one of the reasons he cannot do so might be
that they really do both have minds! In any case, methodologically, if
you can't tell them apart, you certainly have no basis for saying one of
them has a mind and the other doesn't!
> Godfrey:
> What would be the outcome if one person is fooled and cannot tell
> the difference, but another can? Does the machine pass the test
> or not?
It is not about fooling anyone but about designing a candidate that
really has capacities indistinguishable from our own, for a lifetime
if need be.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.turing.html
> Godfrey:
> It is important that the only attribute of a human that is being tested
> here is thinking. It would be incorrect to let physical differences of
> the machine and the human determine whether or not a machine is
> judged as being able to think. For this reason it is important that the
> interrogator is kept in a separate room from the two competitors,
> and that no direct contact is possible.
But this "out-of-sight" constraint has led to some problems and
ambiguities, as we have seen. It cannot test T3, for example. And
T2 makes it seem as if thinking is all just symbols and symbol
manipulation; maybe it's not.
> Godfrey:
> I think that it is unfair to say that for something to be able to think,
> it must be able to mimic the behaviour of a human. Surely dolphins
> are intelligent, but they would probably be unable to pass the
> Turing test, as they are not human.
We will surely have to try to scale up from simpler, subhuman TTs to
human ones. The trouble with subhuman TTs is that they may also be
subtotal (and the TT is predicated on TOTAL equivalence, indeed total
indistinguishability, in performance capacity). We do not know the
total behavioral capacity of other species nearly as well as we know
our own (even though their capacity is smaller). More important, we
don't have the evolved and learned "mind-reading" capacities with other
species that we have with our own (the way we can tell what's "on
someone's mind"). These intuitive capacities too (in the tester) are
critical for the sensitivity of the TT. Turing Equivalence is just
about quantitative performance, but Turing Indistinguishability is
about qualitative "style" too, and we humans are good at detecting
anomalies in humans, but not in nonhumans.
Also, T2 (which requires language) is only possible with people. (Let's
not get into it here, but there is no evidence whatsoever that we could
include dolphins in this skywriting discussion, and that is NOT because
they cannot type, or because no one has yet "translated" their "language"
into our own: they communicate, and they have codes, but they are not
languages. As a quick rule of thumb: If it's really a language, you can
say anything in it that you can say in English.)
The human T3 of course subsumes T2, but an animal T3 would not. No
doubt animal T3 models will be milestones along the road to the human
T3. And conceivably the animal T3 candidates will already have minds.
With T3s based only on the capacities of mentally retarded people (who
DO have minds), the obvious differences might make even the human
(wrongly) fail the test. So, for the same reason that the retarded TT is not
definitive and not the right target (total capacity, total
indistinguishability), neither is the animal TT. Yet it will no doubt
be a milestone (many milestones, actually) along the way.
> Godfrey:
> A machine may still be
> thinking, even if it is at a lower level than human thought.
What does this mean? I suppose it's analogous to being less and more
intelligent. But don't forget, all the people and animals that we can
rank as being less and more intelligent (or as doing "higher" or "lower"
level thinking) are all really intelligent, really thinking. With a
machine it is not clear that it is doing THAT. All that's clear is
that it is generating the performance that in humans and in animals
seems to require THAT (thinking).
But at bottom, as mentioned repeatedly, the only one who can know
whether or not a machine is really thinking, rather than just generating
performance, is the machine itself!
> Godfrey:
> If thinking is simply about making a decision about
> something based upon previous experience or mistakes, then
> machines can already do this.
Right. But thinking is not just that. For THAT is just what you
said it is: "making a decision about something based upon previous
experience," whereas what we were asking about is thinking. (Think for a
moment. What you just did. That's what we mean. Can a candidate system
we have built do THAT?)
> Godfrey:
> By passing the Turing Test, a
> machine has proved that it can think as it is indistinguishable from
> something that we know can think.
It has proved absolutely nothing beyond that it can perform
indistinguishably from something we know can think!
> Godfrey:
> But if a machine does not pass
> the Turing test, it could still be thinking at a lower level.
I don't know what thinking at a lower level means. Does a stone think at
a still lower level? We're trying to avoid throwing around the word
"thinking" to freely, so it can continue to mean what I asked you to do
a moment earlier. A stone does not do a "lower level" version of what
you did a moment earlier (although, because of the other-minds problem,
we can't be sure about that either!).
To help resist arbitrarily projecting "lower level" "thinking"
capacities on anything that does anything, think instead of feeling,
because it's feeling that makes thinking thinking rather than merely
behaving, as in a Zombie. Are there "lower level" feelings in a stone,
or a computer? What does that mean? Ya either feel or ya don't;
otherwise, it's just about WHAT you happen to feel, which is a much less
important question than WHETHER you feel AT ALL.
> Godfrey:
> I think a better approach to the problem would
> be to design a machine that thinks and passes the Turing test as a
> consequence, rather than trying to develop a machine purely to pass
> the test.
Ahem. And your method for making it think, and confirming that it is
indeed thinking, please (as it is not the TT)?...
> Godfrey:
> If a human were to be grown in a
> laboratory environment, by taking a DNA sample and simply
> growing it, the resultant human would be man-made, but it
> would not be eligible for the test, as humans did not design it.
Correct.
> >TURING:
> >we only permit digital computers to take part in our game.
>
> Godfrey:
> What would happen if a new form of computer were to be designed
> in the future that was not digital? Maybe this restriction is a bit too
> tight.
The restriction is too tight, alright (see prior discussion of machines
other than computers). But this future new kind of computer: Would it or
would it not conform to the Church/Turing Thesis? Would it be a Turing
Machine, like all the rest? If so, the difference is irrelevant; it's
just another piece of hardware for doing the same old computation.
And if it violates the Church/Turing Thesis, there are LOTS of
logicians, mathematicians, and computer scientists eager to hear more
about this very first exception to the C/T Thesis!
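To make the point concrete, here is a toy sketch (mine, in Python; the
table and example are illustrative only): a Turing Machine is nothing
but a transition table, and ANY hardware that can step through such a
table -- vacuum tubes, silicon, or your future non-digital computer, so
long as it conforms to the C/T Thesis -- is doing the very same
computation.

    # A minimal Turing Machine simulator: the "machine" is just a
    # transition table; the hardware that runs the loop is irrelevant.
    # (Toy sketch; this table increments a binary number on the tape.)

    # (state, symbol) -> (symbol to write, head move, next state)
    TABLE = {
        ("right", "0"): ("0", +1, "right"),
        ("right", "1"): ("1", +1, "right"),
        ("right", "_"): ("_", -1, "carry"),  # passed the right end
        ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, carry on
        ("carry", "0"): ("1", 0, "halt"),    # absorb the carry
        ("carry", "_"): ("1", 0, "halt"),    # overflow: new leading 1
    }

    def run(tape_string):
        tape = dict(enumerate(tape_string))
        head, state = 0, "right"
        while state != "halt":
            write, move, state = TABLE[(state, tape.get(head, "_"))]
            tape[head] = write
            head += move
        return "".join(tape[i] for i in sorted(tape)).strip("_")

    print(run("1011"))  # -> "1100" (11 + 1 = 12, in binary)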
> Godfrey:
> The fact that digital computers can
> mimic any discrete state machine means that, if it were the case
> that the human brain is in fact a discrete state machine, then a
> digital computer would be able to model it, and therefore be able to
> think. We do not know if the brain is a discrete-state machine yet...
They can also simulate continuous dynamical systems, to as close an
approximation as you like.
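For example (a sketch of my own, nothing in Turing): a digital computer
discretely approximating a continuous dynamical system -- here a damped
pendulum -- by Euler integration. Shrink the step size dt and the
discrete trajectory converges on the continuous one: as close an
approximation as you like, though still an approximation.

    import math

    # Euler integration of a damped pendulum:
    #   theta'' = -b * theta' - (g/L) * sin(theta)
    # The step size dt sets how closely the discrete simulation
    # tracks the continuous dynamics.

    def simulate(theta0, dt, t_end, b=0.25, g=9.81, L=1.0):
        theta, omega = theta0, 0.0
        for _ in range(int(t_end / dt)):
            alpha = -b * omega - (g / L) * math.sin(theta)
            theta += omega * dt
            omega += alpha * dt
        return theta

    # Successively finer steps converge on the same trajectory:
    for dt in (0.1, 0.01, 0.001):
        print(dt, simulate(theta0=0.5, dt=dt, t_end=5.0))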
> Godfrey:
> Gödel's Theorem (1931) shows that 'in any sufficiently powerful
> logical system statements can be formulated which can neither be
> proved nor disproved within the system, unless possibly the system
> itself is inconsistent'. This is a problem, as the digital computer in
> the test will be using a logical system to imitate the behaviour of a
> human. If there are inconsistencies in the system, then the system
> may not produce the correct answer, if any answer at all, thus
> immediately revealing that it is a machine to the interrogator.
This is too quick. See the Lucas discussion.
http://www.cogsci.soton.ac.uk/~harnad/Hypermail/Foundations.Cognitive.Science2001/0009.html
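(For reference, the standard modern statement of the theorem -- my
formulation, not Godfrey's wording:

    $$ T \text{ consistent, recursively axiomatizable, } T \supseteq \mathrm{PA}
       \;\Longrightarrow\; \exists\, G_T :\; T \nvdash G_T \text{ and } T \nvdash \neg G_T $$

Note that it is about sentences UNPROVABLE in a consistent system, not
about inconsistent systems giving wrong answers.)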
> Godfrey:
> I think that it would be possible to be able to discover if a machine
> was thinking or not, without the need to become the machine. A
> machine could be presented with a non-specific problem of any
> difficulty. If the machine were then to apply the best method it
> knows to solve the problem, could it be regarded as thinking? If it
> makes a mistake, but when attempting a problem of the same
> type, adapts and gets it right, could that be regarded as thinking?
Your criteria can already be fully met (rather trivially) by existing
programmes. So do those think? (Turing proposed the TT to avoid quick,
arbitrary criteria like these).
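(To see how trivially, here is a toy of my own devising: it attempts a
problem, makes mistakes, adapts from the feedback on each mistake, and
gets it right -- meeting the stated criteria exactly, with nothing
anyone would call thinking.)

    # A toy "learner" meeting the criteria trivially: it attempts a
    # problem, errs, adapts from feedback, and gets it right.
    # (Illustrative sketch only.)

    def solve(secret, lo=0, hi=100):
        attempts = 0
        while lo <= hi:
            guess = (lo + hi) // 2       # "the best method it knows"
            attempts += 1
            if guess == secret:
                return guess, attempts   # got it right
            elif guess < secret:         # a mistake -> adapt upward
                lo = guess + 1
            else:                        # a mistake -> adapt downward
                hi = guess - 1

    print(solve(secret=73))  # -> (73, 6): wrong five times, then right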
> Godfrey:
> In my opinion, it depends upon the definition of thinking, as we
> cannot say something is thinking if we do not know exactly what
> thinking is. If something appears to be thinking, then, due to our lack
> of knowledge about thinking at the moment, is it not reasonable to
> say it is thinking?
No, we can point to it without knowing how to "define" it. And how much
does something have to "appear" to be thinking? Turing suggests that it
better be at least TT-scale. Otherwise a smart-looking rock qualifies
too. After all, there's no way to know we're wrong except by being the
rock...
> Godfrey:
> Turing talks about a machine's inability to experience some of the
> things we take for granted, such as enjoying strawberries and
> cream, and how this leads to failures in other areas of the machine,
> such as social skills. The machine would not be able to discuss the
> taste of strawberries and cream, as it would have never tasted them,
> and so on. Maybe in the future, a particular kind of sensor will be
> developed that detects taste. If it were fitted to a machine, then
> maybe it could have the experience.
The problem of experience (feelings) is not solved by adding sensors,
but the problem of grounding might be. Such a sensorimotor system,
however, would no longer be just a digital computer (T2) but a T3
robot.
> >TURING:
> >The claim that a machine cannot be the subject of its own thought
> >can of course only be answered if it can be shown that the
> >machine has some thought with some subject matter
Irrelevant. See the reply to Joe Hudson about "self-awareness".
http://www.cogsci.soton.ac.uk/~harnad/Hypermail/Foundations.Cognitive.Science2001/0054.html
> >TURING:
> >It can also be maintained that it is best to provide the machine
> >with the best sense organs that money can buy, and then teach it
> >to understand and speak English. This process could follow the
> >normal teaching of a child. Things would be pointed out and
> >named, etc
>
> Godfrey:
> This is because of the symbol-grounding problem. A machine
> needs to be able to link some symbols in its vocabulary to objects
> in the real world. Without this it may be performing meaningless
> symbol manipulation. If some of the symbols can have meaning,
> then other symbols can be defined in terms of these and the
> machine may be able to learn.
Correct. But notice that if sense organs are involved, Turing is no
longer speaking of T2 but T3. He may be ASSUMING that in that robot the
"thoughts" are just the computational activity, but what if they're the
computational AND sensorimotor activity together?
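To make the grounding idea concrete, a toy sketch (mine, using the
stock "zebra" example): a few symbols grounded directly in (here
simulated) sensorimotor category detectors, and further symbols then
defined purely out of the grounded ones.

    # Toy symbol grounding: some symbols are grounded directly in
    # category detectors (faked here as feature tests standing in for
    # real sensorimotor categorization); others are then defined
    # combinatorially from the grounded ones. (Illustrative only.)

    def looks_horselike(percept):
        return percept.get("shape") == "horse"

    def looks_striped(percept):
        return percept.get("texture") == "striped"

    GROUNDED = {"horse": looks_horselike, "striped": looks_striped}

    # "zebra" is never grounded directly: it is defined from
    # symbols that are.
    DEFINED = {"zebra": lambda p: GROUNDED["horse"](p)
                                  and GROUNDED["striped"](p)}

    print(DEFINED["zebra"]({"shape": "horse", "texture": "striped"}))  # True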
> Godfrey:
> I think that there is a difference between thinking and intelligence.
> The two are related, but it is very easy to get confused between
> them. Intelligence is a higher form than thinking. A machine
> does not have to be intelligent to be able to think, but it has to be
> able to think to be intelligent. Thinking is making informed choices
> based upon information available. I cannot say what intelligence is,
> but I am sure it is more than this.
I suggest you don't make two problems -- (1) is it or is it not really
intelligent and (2) is it or is it not really thinking -- when one
problem is already more than enough!
Stevan Harnad