From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Wed Mar 21 2001 - 12:11:25 GMT
On Thu, 1 Mar 2001, Yusuf Larry wrote:
> http://cogprints.soton.ac.uk/documents/disk0/00/00/04/99/index.html
> Yusuf L:
> The analysis of the question: "can machines think?" using a "game"
> introduces the concept of winning and losing, of right and wrong, which
> distracts us from the matter in hand. Hard coding a machine for example,
> that could fool C, would not imply that this machine can think but just that
> C was not able to identify it as a machine.
Correct. But once the "trick" is T2- or T3-scale (lifetime penpal or
robotic performance capacity, totally indistinguishable from our own)
we have long left the domain of games/tricks/fooling and entered the
scientific domain of reverse engineering brain capacity.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.turing.html
> Yusuf L:
> we are encouraged to judge the machine and the human by performance
> capacity and not by aesthetics. Hence, a machine that can think is one that
> can do what we do, indistinguishably from the way we do it. This has nothing
> to do with the look and feel of the system.
Yes, but in excluding appearance (it is appearance, not really
"aesthetics," that is at issue) have we unwittingly excluded a lot of
performance capacity too (T3)?
> Yusuf L:
> the emphasis needs to be more on developing true systems, i.e. ones that can
> truly analyze a question or situation and reply based on an understanding of
> this question, rather than on a set of inputs and outputs.
But what more is there, besides external inputs, internal states, and
outputs? What do you mean by "truly"? Isn't T2/T3 success its own
reward?
> > TURING:
> > It is natural that we should wish to permit every kind of engineering technique to be used in
> > our machines. We also wish to allow the possibility that an engineer or team of engineers
> > may construct a machine which works, but whose manner of operation cannot be
> > satisfactorily described by its constructors because they have applied a method which is
> > largely experimental.
>
> Yusuf L:
> In building the thinking machine, we should not restrict ourselves to
> particular techniques but rather explore every possible form of engineering.
> And just as humans cannot explicitly describe how it is we think, how
> another person thinks or does something intelligent, or in fact whether
> they think or are intelligent at all, we may not be able to do the same
> with the machine, or indeed need to, in order to justify the machine as a
> thinking system.
Not quite.
First, Turing's first sentence above, I think, suggests that he did NOT
mean computation to be the only possible means of passing the TT. (I
also think that, apart from the appearances problem, he would have
accepted a T3 robot as a legitimate candidate for being tested by T2.)
However, he does seem to go on to reject this statement later (see
below).
But in forward engineering, the objective is to generate a performance
capacity because it is useful. For this, even a system built while
sleep-walking would be useful, because although the designer does not
understand how it works, he does know how to build it, so can get the
performance.
For reverse engineering, however, it's not enough just to generate the
performance. We have to have a causal/functional/computational
understanding of HOW it generates the performance. I think Turing means
here that we can arrive at the successful device analytically, by
calculating and proving things in advance, or experimentally, by trial
and error (this is why AI is like maths and also like experimental
science), and that if we do it experimentally, we might get forward
engineering success before we have managed to reverse engineer its
function.
But I don't think what Turing means is the fact that we don't know how
our own minds work (hence we cannot build such models simply by
introspecting on how our minds do it, and "reading off" the algorithm or
mechanism).
> Yusuf L:
> Genetically engineering a person is more reproduction than the construction
> of a thinking machine. Even though I agree that humans like machines are
> causal systems thus in some sense making them machines, we would not be able
> to understand or describe how a genetically engineered person works anymore
> than we can describe ourselves.
Correct.
> > TURING:
> > we only permit digital computers to take part in our game.
This does sound like it contradicts the statement that any machine is
eligible. And the grounds for excluding everything but computers seem to
be rather arbitrary, if not fuzzy. Yes, we know from the Church/Turing
Thesis that a Universal Turing Machine (approx. a digital computer) can
"emulate" (= simulate) any other Turing Machine, and that for every
machine of any kind, there is always a Turing Machine that is "Turing
Equivalent" to it (i.e., simulates it). So that means a computer can
simulate any machine. So it sounds as if there's no point in having
candidates other than computers, because there will always be a
computer that can simulate that candidate.
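To make "simulate" concrete, here is a minimal illustrative sketch (the
toy machine and its transition table are invented for the example, not
drawn from Turing's paper): one fixed, general-purpose program that can
run ANY Turing Machine handed to it as mere data:

    # A general-purpose Turing Machine simulator: the machine being
    # simulated is pure data (a transition table), not new hardware.
    def run_tm(transitions, tape, state="start", blank="_", steps=1000):
        # transitions: (state, symbol) -> (new_state, write, move),
        # where move is -1 (left) or +1 (right)
        cells = dict(enumerate(tape))  # sparse tape: position -> symbol
        pos = 0
        for _ in range(steps):
            symbol = cells.get(pos, blank)
            if (state, symbol) not in transitions:
                break  # halt: no rule applies
            state, write, move = transitions[(state, symbol)]
            cells[pos] = write
            pos += move
        return "".join(cells[i] for i in sorted(cells))

    # A toy machine that inverts a binary string -- one of indefinitely
    # many machines the SAME simulator can run, just by swapping tables.
    invert = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
    }
    print(run_tm(invert, "10110"))  # -> 01001

Swap in a different table and the very same simulator becomes a
different machine; that is all "Turing-equivalent" simulation amounts to.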
But what about the kinds of (implementation-DEpendent) properties whose
simulation is Turing-equivalent but not causally equivalent, such as
planes flying, furnaces heating, stomachs digesting, (brains thinking?)?
A digital computer could not pass a Turing Test for flying! The only
machine that could do that would be a plane.
Here is where the fuzziness of Turing's "out-of-sight" constraint, and
the restriction to T2 (penpal performance only), come in. Turing has
simply ASSUMED that no performance capacity other than verbal
communication capacity is relevant to having and demonstrating
intelligence (thinking, having a mind). And this assumption could be
dead wrong in at least three respects:
(1) T2 may not be a strong enough test for intelligence. (Penpal
capacity may not put the candidate through enough of its paces, just
as a toy test on chess playing alone, or a mere 10-minute test would
not.) Hence T2 may fail Turing's own criterion of Total
Indistinguishability in performance capacity.
(2) T3 capacity may be needed even to pass T2 (and a digital computer
alone cannot have T3 capacity: only a hybrid sensorimotor system could,
whether or not you are directly testing its sensorimotor capacities).
(3) Turing too, like many others, may have been seduced by the fact
that (because of the other-minds barrier), thinking is not observable
(except to the thinker!). So it looked as if there was no danger that a
computer might be leaving something essential out in the case of
thinking, as it obviously would be in the case of flying: It looked as
if a computer could BE everything a thinker is, as long as it could DO
everything a thinker could do. But
to exactly ONE thinker (the candidate itself) there is something as
observable about thinking as there is about flying, and that something
could be absent in a digital computer alone (indeed, Searle's Periscope
shows that it IS absent).
So Turing equivalence and the irrelevance of appearance may have led
Turing a little astray here. He should have allowed ANY machine to be a
candidate (why not?), and he should have been prepared to admit other
capacities the mind has over and above its penpal capacities. (Because
the Church/Turing Thesis is really about the "descriptive power" of
computation, and basically states that computation can describe just
about anything, I think Turing was similarly seduced by the descriptive
power of language, thinking that a penpal correspondence could describe
anything, including anything a robot can see or do!)
Was this a big mistake on Turing's part, or just the failure to dot
some i's and cross some t's? I'm inclined to think it was more the
latter.
But his failure to dot those i's and cross those t's may have had big
consequences, leading a lot of lesser minds needlessly astray.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html
> Yusuf L:
> Does this restrict our search for our answer to the question, “can
> machines think”? For instance, what if a digital computer is incapable of
> thinking because our ability to think, be intelligent etc is dependent on
> our physiology? John Searle [Searle 1980, Searle 1992], believes that what
> we are made of is fundamental to our intelligence. Perhaps the machine that
> would be able to think and be intelligent would have to process information
> in parallel like the brain and would be able to use true fuzzy logic rather
> than an adaptation of binary logic. Such a machine does not fall under the
> description of the word digital computer.
Parallel processing doesn't (but it can be serially approximated as
closely as you like); and fuzzy logic can certainly be implemented
digitally. But you are quite right that there are definite physical
functions that a machine (including the brain) could have, functions on
which intelligence could depend critically, that are noncomputational.
And such functions would be missing from a digital computer, just
doing implementation-independent computing.
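Both points are easy to cash out digitally; here is a small
illustrative sketch (the connectives are the standard Zadeh min/max
operators, and the toy two-unit network is invented for the example):

    # Fuzzy connectives (Zadeh operators): just arithmetic on degrees
    # of membership in [0, 1] -- straightforwardly digital.
    def fuzzy_and(a, b): return min(a, b)
    def fuzzy_or(a, b):  return max(a, b)
    def fuzzy_not(a):    return 1.0 - a

    print(fuzzy_and(0.8, 0.3))  # 0.3: "fairly tall AND slightly heavy"

    # Serial approximation of parallel processing: each unit's next
    # value is computed from the frozen previous state, so a serial
    # loop reproduces a synchronous (parallel) update exactly.
    def parallel_step(state, weights):
        return [sum(w * s for w, s in zip(row, state)) for row in weights]

    print(parallel_step([1.0, 0.5], [[0.2, 0.8], [0.6, 0.4]]))  # [0.6, 0.8]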
> Yusuf L:
> Prior to this statement, Turing had described digital computers as machines
> capable of performing computation; where computation is
> implementation-independent, semantically interpretable, syntactic,
> rule-based symbol manipulation. Following Turing’s argument of
> indistinguishable entities, if this digital computer can mimic the human
> computer indistinguishably then it could be the human computer. This implies
> that the human computer is simply a machine that performs computation. I
> disagree. However, it is possible that a computational system might be able
> to mimic some or even many of the human computer's functions (hence the
> "very closely"), but mimicking is as different from being as simulation is
> from real life.
This is a little vague. We are no longer speaking of "mimicry" but about
T2 or T3 (or even T4). Are you saying (with Searle) that only T5 (the
full, exact, organic functional explanation of real biological brain
activity) can explain or implement a mind? (The position is tenable, but
do you really want to defend it, and argue that a lifelong T3 robot, or
even a lifelong T4 synthetic body and mind could not really have a
mind?)
> Yusuf L:
> a digital
> computer cannot be programmed to mimic the human computer...
> because we are not discrete state machines but truly random
And you think "true randomness" is the critical element for implementing
a mind? Why?
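Note, for what it is worth, that a purely discrete state machine can
already LOOK random. A minimal sketch (a textbook linear congruential
generator, with the usual Numerical Recipes constants):

    # Fully deterministic -- a discrete state machine par excellence --
    # yet its output passes casual statistical inspection.
    def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
        x, vals = seed, []
        for _ in range(n):
            x = (a * x + c) % m   # next discrete state
            vals.append(x / m)    # pseudo-uniform value in [0, 1)
        return vals

    print(lcg(42, 5))

So "true randomness" would need an independent argument before it could
be the thing a mind depends on.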
> Yusuf L:
> Passing the TT requires a lifetime capacity and lifelong
> indistinguishability, to anyone and everyone. It is only then we can start
> to consider the thinking machine.
Correct.
> > TURING:
> > I believe that in about fifty years' time it will be possible to programme
> > computers, with a storage capacity of about 10^9, to make them play the
> > imitation game so well that an average interrogator will not have more than
> > 70 per cent chance of making the right identification after five minutes of
> > questioning.
And what on earth does 70% in 5 minutes test, or show?
> > TURING:
> > The original question, "Can machines think?" I believe to be too meaningless
> > to deserve discussion. Nevertheless I believe that at the end of the century
> > the use of words and general educated opinion will have altered so much that
> > one will be able to speak of machines thinking without expecting to be
> > contradicted.
>
> Yusuf L:
> If it is such a meaningless topic, why are we discussing it? To say that,
> rather than checking whether a machine can think, we should make it take
> the TT (imitation test), and that if it is indistinguishable it can think,
> is a good approach; but it does not diminish the scientific interest of
> whether machines can think. If we were
> to develop a digital computer that passes the TT, people would still be
> interested in whether it can or cannot think. Better yet what if we develop
> a system that passes the TT but can’t think?
I followed you up to the last sentence: If a TT passer (T2? T3?) passed
but didn't in reality think, how could you possibly know that? That's
the whole point of Turing Indistinguishability. (It could be summarized
as the modified proverb: "Only a fool argues with the indistinguishable.")
The question "But is is really thinking? does it really have a mind?"
may be undecidable by the rest of us, but I assure you that (if the
answer is yes), it is not only meaningful but the MOST meaningful
question for at least one of us: The candidate himself. Hence it is and
always was false, and almost fatuous, to say that the question is
meaningless! (Let us learn the wise lessons, not the foolish ones, from
Turing Indistinguishability. [By the way, Larry is not the one I am
calling foolish in this little homily! One of the wonders of Skywriting
is that you can argue, in public, with a dead author!])
> Yusuf L:
> Turing acknowledges [the Lucas] argument but suggests it isn’t totally applicable
> because if Godel's theorem is to be used we need in addition to have some
> means of describing logical systems in terms of machines, and machines in
> terms of logical systems.
Kid-Sib couldn't follow that, but have a look at the Lucas paper and the
Skywriting on it:
http://cogprints.soton.ac.uk/documents/disk0/00/00/03/56/index.html
http://www.cogsci.soton.ac.uk/~harnad/Hypermail/Foundations.Cognitive.Science2001/0009.html
> > TURING:
> > 4. The Argument from Consciousness: This argument appears to be a
> > denial of the validity of our test. According to the most extreme
> > form of this view the only way by which one could be sure that
> > machine thinks is to be the machine and to feel oneself thinking.
>
> Yusuf L:
> Perhaps consciousness is not what we should be looking for but that implicit
> representation of how we do what we do and are aware of what we are doing.
I couldn't follow that last bit again. But about consciousness: Yes and
no. No point looking for it, because the other-minds problem prevents
you from being able to know whether it's there. But that's not true of
the candidate himself! And that's not a denial of the "validity" of the
TT, but it is a definite limitation on it!
> Yusuf L:
> the computer that has been taught would probably only be
> equivalent to a pet.
Still plenty of Nobel Prizes in AI for doing that (and all the
fundamental problems would already be solved).
> Yusuf L:
> Because digital computers as described by Turing might
> not be capable. Sensory-motor features might and most probably will be
> required.
Agreed.
> Yusuf L:
> learning introduces problems like the credit/blame
> assignment and frame problems. I like the idea of taking the development
> in stages: I believe that trying to develop a system indistinguishable
> from a 2-year-old is a complex enough task, and that we should try to develop
> smaller components or machines indistinguishable from less complex life
> forms and work our way up.
Yes, we can only scale up to T3 gradually.
> Yusuf L:
> Turing approached a very complex discussion with an ideal; however, he
> left a lot of his ideas open to misinterpretation. I also believe that he
> probably didn’t realise the impact his paper would have on AI as a whole,
> and probably didn’t even see the genius in some of his ideas and what he
> said until others started interpreting them.
I agree.
Stevan Harnad