Re: Grandmother objections

From: Harnad, Stevan (harnad@cogsci.soton.ac.uk)
Date: Thu Dec 14 1995 - 22:02:51 GMT


> From: "Harrison, Richard" <RJH93PY@psy.soton.ac.uk>
> Date: Thu, 14 Dec 1995 12:01:09 GMT
>
sh> A nongrandmotherly version of this objection, however, points to the
sh> symbol grounding problem: The symbols in a computer are ungrounded; our
sh> brains are not.
>
> And this is Searle's point, isn't it? If machines are only
> programs or computational symbol manipulators then they do not have
> understanding in the way we do (even if they can pass either version
> of the Turing test).

Searle's point is that implementing the right programme (performing the
right symbol manipulations), even one that passes the Turing Test,
cannot amount to understanding because he himself could implement the
same programme without understanding.

That's not the Symbol Grounding Problem, it's the Chinese Room
Argument. The Symbol Grounding Problem is this: Symbol Systems (even
pen-pal symbol systems) mean something, are ABOUT something, only
because they are interpretable by external interpreters as being about
something. Our thoughts are not like that; they are about what they are
about on their own, without the help of an external interpreter. The
symbol grounding problem is: How can you ground symbols in what they
are about directly, so that it doesn't just depend on the mediation of
an external interpreter?
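
To picture what an ungrounded "pen-pal" symbol system amounts to, here
is a minimal sketch (the token strings and rule table are invented
purely for illustration, not anyone's actual programme): it pairs input
strings with output strings by rote, and whatever the exchange means,
it means it only to an external interpreter reading the transcript.

# A toy ungrounded symbol manipulator (Python; illustrative only).
# To the system, these are just uninterpreted tokens paired by a rule table.
RULES = {
    "NI HAO MA?": "WO HEN HAO.",    # "meaning" exists only for the reader
    "NI E LE MA?": "WO BU E.",      # the system has never met hunger or food
}

def reply(symbols: str) -> str:
    # Pure symbol manipulation: look up the input, emit the paired output.
    return RULES.get(symbols, "QING ZAI SHUO YI BIAN.")

print(reply("NI HAO MA?"))  # prints "WO HEN HAO." -- grounded in nothing

Nothing in that programme connects "WO HEN HAO." to feeling fine; the
connection is supplied entirely by whoever reads the transcript.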

By the way, only the pen-pal Turing Test (T2) passed by a pure symbol
manipulator is vulnerable to the Chinese Room Argument. A robot passing
T3 is immune to it (because sensorimotor transduction, unlike
computation, is not implementation-independent): Searle can't implement
the robot completely, so the System Reply would be the correct answer
if Searle did only the computational part of it and said "Look, I'm
doing all this symbol manipulation but I don't see or hear or
understand anything I'm doing, so neither does the robot." The right
answer then would be: You don't see or hear or understand, but you're
just part of the robot; maybe the robot as a whole does!

> This seems reasonable to me (am I missing something?). You said the
> main objection to the Chinese Room argument is that the System as a
> whole 'understands' Chinese even if the English speaker in the room
> doesn't. This objection appears to be easily countered by the man
> internalising the rules and answering Chinese questions outside.

For T2, but not for T3, and that's because to pass T3 the symbols have
to be grounded in the capacity of the robot to interact with all those
things that its symbols are interpretable as being about; by then it is
no longer just a matter of interpretation.

> So, we move onto the symbol grounding problem as it is the only
> fundamental difference between biological machines/minds (whatever)
> and nonbiological 'built' machines. There seems to be a problem in
> grounding (giving meaning) to symbols in the latter as in us (and other
> animals) there are consequences to getting it right and wrong whereas
> programming a computer to care ('be careful or we'll unplug you...')
> doesn't seem to be the same. (I think this was the conclusion Denise
> came to in the seminar (?)).

Wait a bit. Searle is the one who says the only option the Chinese Room
Argument leaves you is the real brain. But that's not true. There are
other candidates for minds (and T3 passers) besides computers and
brains, and some of these might be nonbiological, "built" machines.

Yes, one of the features of having a mind is that things matter to you,
whereas nothing matters to rocks (or, since Searle's argument,
to computers); but you're slipping back into grandmotherliness if you
think there's something you know about the difference between "built"
machines and biological ones that rules out the possibility that a built
machine (say, a T3 robot) could have a mind, hence that things could
matter to it.

> Am I getting anywhere with this or am I a Grandmother in disguise?

We're all susceptible to Grandmother Arguments. You just voiced
a combination of (1) and (8).

(1) Computer [machine] only does what it's programmed [built] to do.
(8) Computers don't/can't have feelings [nothing matters to them].

By the way, there can be consequences of doing one thing or the other
without the need for anything mental. Consequences are a causal matter,
not a mental one. Coding for the right trait has the consequence, for a
gene, that its vehicle survives and reproduces. The mental question is
whether the ("selfish") gene WANTS its vehicle to survive and reproduce
(which of course it doesn't, because genes don't have minds).

> Also, where has all this left our definition of machine? Of mind?

A machine is still just a causal system, of which we and toasters are
examples. In this sense, every physical system, and perhaps every part
of every physical system, is a machine. Some machines are
"natural," some are "artificial" (because man-made -- though that's a
bit paradoxical, since people are natural, so surely everything they do
is natural, so in a sense all machines are natural; but it's safe to say
that some machines are and some machines are not man-made).
So far, defining machines (causal systems), or even man-made machines,
has given us no inroads on what is or isn't a mind.

What is a mind? Well, in a sense we all know, because we all are minds;
but knowing what it is from experience is not the same as defining it,
and no one has a definition -- except that a system has a mind if it has
experiences, if there's something it's like to BE that system, if
there's someone home in there.

Here's one bit of progress, though: We can't be sure any system apart
from ourselves has a mind. But we can be pretty sure anyway. In general, if
I can't tell it apart in any way from someone with a mind, like me, I
assume it has a mind. That's how I know you have a mind; and that's why
I'm pretty sure a rock doesn't. And that's why I'm not sure about very
simple animals (I'm just as sure about mammals as I am about you;
almost as sure about vertebrates; and pretty sure about invertebrates
too: it hurts lobsters when you boil them. With jellyfish my intuitions
begin to break down. And I certainly HOPE plants don't feel anything or
there's nothing left for me to eat!)

Now the progress: A T3 robot could be you or me; we can't tell the
difference. And since we have nothing more than that to go on, if
you're told (after a lifetime) that I was a robot, what have you learnt?
That now you're sure that if you kick me, I won't feel it, no matter how
I act?

So T3 really is the bottom line; maybe it's a good enough test, maybe
not. But one thing's sure: You'll never know better. If you think you
could, please tell me: how could you ever tell whether or not a system
that you could not tell apart from a system with a mind has a mind?


