Re: Searle's Chinese Room Argument

From: Harnad, Stevan (harnad@cogsci.soton.ac.uk)
Date: Wed Jan 17 1996 - 20:44:07 GMT


> From: "Baden, Denise" <DB193@psy.soton.ac.uk>
> Date: Thu, 7 Dec 1995 12:16:09 GMT
>
> I've got brain ache thinking about this one. In many ways I follow, and
> agree with what Searle says. However, just as he accuses Berkeley of
> moving the goalposts, Searle also varies his point of attack. I agree
> that computers, simply by processing symbols, do not exhibit real
> understanding or intentionality, as these symbols have zero meaning to
> them. However, he also disputes that AI work provides any significant
> contribution to understanding.

Does he? He explicitly says he is only attacking "Strong AI" (which
is the same thing as computationalism), according to which "the mind is
a computer programme" (i.e., cognition is computation), "the brain is
irrelevant" (i.e., the hardware is irrelevant), and "the Turing Test is
decisive" (if there's no way we can never tell it apart from someone
with a mind, then we have no basis for denying it has a mind).

But there is still "Weak AI" ("computabilism") according to which
computers can test our theories about how the mind works -- just as they
can test our theories about how the planetary system works, or cars, or
planes...

See:
ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad82.neoconst

> On this point, I tentatively disagree. Searle claims that putting a
> robot in the room, with access to images of the outside world, with
> intentionality, in the form of a man, would still not enable the man to
> understand Chinese, beyond manipulating symbols. However, to use his
> example, if the man read a squiggle, and then saw that (through robot
> eyes) displayed over a hamburger restaurant; then read a squoggle and
> saw that on a waiter's uniform, he would start to 'understand' the
> symbols, so that they had real meaning. The 'meaning' as opposed to the
> symbol arises from the fact that in biological organisms, certain
> things are weighted (by genes, instinct) so that they matter, e.g. food.
> We also have a limbic system which in effect enables satisfaction, i.e.
> it matters to us that we have the things that matter.

You're going a bit too fast here. First, we're not talking about a man
using robot sensors to learn about the world. We are talking about what
is really going on inside a real man: Is it just that the right software
is running? that he is just an implementation of the right
computations? Searle shows this cannot be true for a computer pen-pal
that we think understands Chinese, because he can "run" the same
programme himself, doing exactly what the computer does, computationally,
without understanding any Chinese in so doing.

A robot is another story. Searle cannot BE the robot without seeing, in
the way he CAN be the running computer programme without understanding.
And if all he does is run the programme that might be going on inside
the robot, then he's not being the whole system, and then the "System"
reply is correct ("Why should Searle have the whole system's mental
states if he is not being the whole system?"). But the System reply does
not work against the standard pen-pal version of the Turing Test, for
there Searle IS the whole system.

You are right about the need to ground symbols, but that is not a
refutation of Searle's Chinese Room Argument, which only applies to
computers and the pen-pal version of the Turing Test (T2), not to
robots and the robotic version of the Turing Test (what I've called the
"Total Turing Test" or T3).
See
ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad89.searle and
ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad90.sgproblem

As to what "matters" to us: Even a grounded robot that passes T3 won't
guarantee that you've captured that.
See:
ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad91.otherminds

> The most obvious difference I see between brains and computers, is that
> these factors are extensively interconnected with our information
> processing abilities, so that symbols have meaning for us. If this is
> so, then it would be unfair to claim that computational models of
> information processing, add nothing to our understanding of the mind,
> although it would be reasonable to claim that they leave plenty out.
> Searle claims that intentionality can only be a biological
> phenomenon.

Not quite clear what position you are taking here. Sure, the symbols in
our heads (whatever they are) have meaning, unlike the ungrounded
symbols in a computer. But what IS that property that they have, then?
Is it just groundedness (the capacity for T3 robotic interactions with
the things that the symbols are about)? Or is it something more, having
to do with the biochemical stuff we happen to be made out of (T4)?

The truth could go either way, but Searle's Chinese Room Argument has no
bearing on it, for that argument only refutes one possibility: That
it's all just computational, that to have mental states, all you need
to do is run the right programme, on any hardware that can run it.

> This is where I got brain ache, trying to decide whether we could
> programme in these extra factors that make things matter to us.
> Couldn't a computer that was programmed to 'weight' symbols with a
> feedback system -- no, forget it, I agree with Searle, because the
> computer just wouldn't care if it got it right. Aha, yes it would if
> you programmed it to resist being unplugged -- no, that won't work
> either. Did you know that Data -- the robot -- was considering this
> problem in Star Trek only the other day?

Welcome to the peculiar form of brain-ache caused by pondering the
mind/body problem!

By the way, does Data FEEL anything when you pinch him, or is he just
going through the motions? If not, does he MEAN anything when he says
something, or is he just going through the motions? Is it just
meaningful to YOU, the way a book is? The symbols don't mean anything
TO the book; the book itself does not mean anything by them, because
there is nobody home in there. Is there anybody home inside Data? I
think the answer would be "yes" if Data really existed -- I mean as a
real robot, rather than just a fictional one, for Real Data would pass
T3, and I don't believe anything could do all that with nobody home:
otherwise, why would there be anyone home in any of us, since we are
functionally indistinguishable from Data? The Evolution that shaped us
could never have told us apart (if Data could also reproduce), for, like
us, Evolution is not a mind reader; it too suffers from the "other
minds" problem...

> A last question. On p. 422, top left, Searle says:
>
> JS> What matters about brain operations is not the formal shadow cast
> JS> by the sequences of synapses but rather the actual properties of the
> JS> sequences. All the arguments for the strong version of AI that I have
> JS> seen insist on drawing an outline around the shadows cast by cognition
> JS> and then claiming that the shadows are the real thing.
>
> What does he mean?

He means that a computer simulation of neurons will not have the same
properties as neurons, any more than a computer simulation of a plane
can fly. (He is also making a rather strained analogy with the shadows in
Plato's Myth of the Cave; those shadows had nothing much to do with formal
computer models, even though they shared the word "form" in Plato's
theory of the formal essences of things; Plato thought the "real" thing was
not the red apple, but the form of redness and appleness. A rather
obscure idea, if you think about it seriously; it makes sense only when
one considers the reality of things like numbers, which really do seem
to exist only as formal entities, rather than concrete ones.)

Chrs, S


