Granny Objections to Computers' Having Minds

From: HARNAD Stevan (harnad@cogsci.soton.ac.uk)
Date: Wed Mar 13 1996 - 16:13:02 GMT


Here are 11 Granny objections to the idea that cognition is computation.
There are probably more such objections, but many will be variants of
these. You are invited to support or rebut any of them, or any of the
replies to them.

(1) Computers only do what they're programmed to do.

[Answers: (a) A program is just a set of mechanical rules; many of our
traits are programmed in that sense, e.g., the DNA code in our genes.
(b) That a programmer writes a programme doesn't mean he knew in
advance everything it would or could do: even a programmer who had
invented the DNA code could not have anticipated everything it would go
on to do. (c) Programmes can be self-modifying, just as we are (see the
sketch below). (d) Codes can be generated without a programmer (e.g.,
DNA).]
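
A minimal illustrative Python sketch of point (c), not from the original
discussion (the names "rules" and "step" are invented for the example):
the programme rewrites its own rule table as it runs, so its later
behaviour is not fixed by the rules its programmer originally wrote down.

    import random

    # The "programme": a table of rules mapping a state to an action.
    rules = {"start": lambda: random.choice(["explore", "exploit"])}

    def step(state):
        # Apply the current rule for this state.
        action = rules[state]()
        # Self-modification: install a new rule the programmer never
        # wrote, derived from what just happened at run time.
        rules[action] = lambda: action + "-again"
        return action

    if __name__ == "__main__":
        state = "start"
        for _ in range(3):
            state = step(state)
            print(state)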

(2) Computers can't do anything new.

[Yes they can: for example, through chance effects coming from outside
the computer (its inputs) or from inside it, through pseudo-random
effects produced by the code itself, through consequences of the code
that the programmer did not expect, or through self-modification (a
sketch follows below).]
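
Another purely illustrative Python sketch (the function and word lists
are invented for the example): the output depends on outside input and
on pseudo-random choices, so the particular sentences produced were
never written down by the programmer and could not all have been listed
in advance.

    import random

    SUBJECTS = ["the cat", "a granny", "this computer"]
    VERBS = ["imagines", "rebuilds", "questions"]

    def novel_sentence(outside_input):
        # Combine outside input with pseudo-random choices; this exact
        # sentence appears nowhere in the text of the programme.
        return "%s %s %s" % (random.choice(SUBJECTS),
                             random.choice(VERBS),
                             outside_input)

    if __name__ == "__main__":
        print(novel_sentence("the idea that cognition is computation"))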

(3) Computers can't be creative.

[Yes they can; see above. And if you mean REALLY creative, like
Einstein, most of us can't either.]

(4) Computers can't make mistakes.

[Yes they can; reply similar to reply about doing something new.]

(5) Computers are mechanical; we are flexible.

[Programmes can be extremely flexible, adapting to their inputs or to
changes in their own code; and, looked at on another scale, we're pretty
mechanical too: predictable, repetitive, limited.]

(6) People have real-time histories; computers only have a pseudo-past.

[If you were duplicated, molecule-for-molecule, at this moment, your
double would not have a real history either: so what? It doesn't matter
whether the current state was reached through a real history or in some
other way: if it's the right state, it's the right state.]

(7) Computers can't choose; they can only do what they are programmed to
do.

[See the replies above on freedom, flexibility, and error; besides, it's
not clear whether we can really choose either.]

(8) Computers don't/can't have feelings.

[Whether or not that's true is what this is all about; it cannot simply be
assumed to be true.]

(9) We're not mere machines.

[What's a machine? -- Till further notice, it is any system that operates
according to the causal laws of physics. And what are we?]

(10) I don't want to know how a computer does it, I want to know how *I*
do it.

[Till further notice, the clearest theory of how anyone or anything does
certain kinds of intelligent things is: through computation. So until a
better theory comes along, we have no basis for rejecting computation.]

There IS a version of (10) that is not a granny-objection, and that is
that computers can only do little BITS of what we can do; and since
there are many ways to do the little bits, there is no point taking any
one of them seriously as an account of how we do it. The answer to this
is that it's correct, but if/when the models start scaling up toward
human capacity (as the Turing Test, which we will discuss soon,
dictates), this objection loses its force.

(11) Computers are isolated from the world; we are not.

[Computers can be as interactive with the world as their input/output
devices make them.]

A non-granny version of this objection, however, points to the symbol
grounding problem: The symbols in a computer are ungrounded; the symbols
in our brains are not. We will return to this after we discuss Searle.
http://sable.ox.ac.uk/~popx/students/practicals/popbeast/node18.html


