From: HARNAD Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Tue Mar 20 2001 - 13:37:23 GMT
On Sun, 25 Feb 2001, Hunt Catherine wrote:
> http://cogprints.soton.ac.uk/documents/disk0/00/00/03/19/index.html
>
> Hunt:
> This paper explores the relationship between cognition and computation.
> Chalmers explores in depth the idea that computers either can or could
> simulate the mind and its functions, or in other words - mentality.
Can we agree on an important distinction? "Simulate" is equivocal. We
know a computer can simulate just about anything (that's the
Church/Turing Thesis and Turing Equivalence). What we are asking about
is whether a computer can actually think, can actually have mental
states, just by implementing the right programme. It is about
IMPLEMENTING thinking, not SIMULATING thinking. We know a computer can
simulate, but not implement, flying; what about thinking?
> Hunt:
> I am left thinking that he is
> not entirely convinced of the relationship between cognition and
> computation. However, he does present some compelling arguments and is
> of the opinion that computation will be central to cognitive science in
> future years.
That sounds like Weak AI.
Strong AI would be: Run the right programme (the T2-passer) and the
system will think (understand, be intelligent, have a mind). Chalmers
will argue (half-heartedly, as you have noted) that a programme changes
the "causal structure" of a computer, so as to turn it into the causal
structure of any other machine. THIS is Chalmers's fatal weak point (in
my opinion). For no matter how a programme alters the causal structure
of a computer, making it "emulate" any other machine, it cannot make it
fly, if it is "emulating" an airplane. So the question becomes: "Is the
change in causal structure that is induced by running the T2-passing
programme sufficient to make a computer think? or is thinking more like
flying?"
> > CHALMERS:
> > thesis of computational sufficiency, stating that the
> > right kind of computational structure suffices for the possession
> > of a mind, and for the possession of a wide variety of mental
> > properties. Second, facilitating the progress of cognitive science
> > more generally there is a thesis of computational explanation,
> > stating that computation provides a general framework for the
> > explanation of cognitive processes and of behavior.
The first would be Strong AI (computationalism/cognitivism:
cognition = computation).
The second would be Weak AI (Church/Turing Thesis: The computer can
simulate anything, and so is a useful tool for modelling anything).
> Hunt:
> He introduces the concept of some
> kind of computational structure that has the ability to simulate the
> mind and its functions, and that computation itself can explain the
> mind and its functions
Do you see how "simulate" is equivocal here: We're asking whether just
running the right code can BE a thinking mind, just as we ask whether
just running the right code can BE a flying plane (answer in the second
case: it cannot).
Computational explanation is just Weak AI: You can have a computational
model of a plane, which explains all the principles of flight, but
without actually being a plane, or flying. Nor does a computational
explanation of flying mean that flying is computation!
> > CHALMERS:
> > Some have [argued] that certain human abilities
> > could never be duplicated computationally...
> > or that even if a computation could duplicate human
> > abilities, instantiating the relevant computation would not suffice
> > for the possession of a mind
Again, too many equivocal words. We've already had "simulating", now we
have "duplicating" and "instantiating."
Kid-sib says the question is simple. I know what I mean by "thinking"
(I do it all the time). So would running the right computer programme
cause THAT to happen in a computer?
> > Others have questioned
> > the thesis of computational explanation, arguing that computation
> > provides an inappropriate framework for the explanation of
> > cognitive processes
This refers either to those who doubt the Church/Turing Thesis (that
just about anything can be simulated computationally) or who stress the
possible role of non-computational structures and processes in
implementing a mind (e.g., parallel processing, distributed processing,
neural nets, sensory transduction, motor transduction, analog
processing, implementation-DEpendent processing, etc.)
> Hunt:
> human mind is a complex entity that we do not fully understand
> ourselves yet... human beings are developing the
> computations in question, if we do not understand ourselves, then how
> can we develop computations that simulate the processes that are
> carried out in the mind?
Haven't we successfully understood and explained a lot of complex
things (including atoms and the universe)? We managed all that with our
minds; why shouldn't we be able to understand and explain our minds
too? ("Complexity" is not an argument.)
> Hunt:
> what is "some other technical notion"? [in place of computation]
> Perhaps the author could have given an example to clarify the
> argument to the reader.
I gave some noncomputational examples above. In general, if implementing
a mind were an implementation-DEpendent matter of getting the physics
right, then the mind would be just another dynamical system, like a
planet, an electron, an organism or an airplane, to be explained by
differential equations rather than computation.
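To make that concrete, here is a minimal sketch (my own toy illustration
in Python, not Chalmers's or anyone else's code; the lift and mass figures
are made up) of what "explained by differential equations" looks like when
it is merely simulated: a toy airplane whose vertical motion follows
dv/dt = (lift - mg)/m.

    # Toy "airplane": vertical motion under a made-up constant lift force.
    # The numbers are illustrative, not real aerodynamics.
    def simulate_liftoff(lift=12.0, mass=1.0, g=9.8, dt=0.01, steps=500):
        """Integrate dv/dt = (lift - m*g)/m and dh/dt = v in discrete steps."""
        height, velocity = 0.0, 0.0
        for _ in range(steps):
            velocity += ((lift - mass * g) / mass) * dt
            height += velocity * dt
        return height

    print(simulate_liftoff())  # a number interpretable as "altitude in metres"

The output is just a symbol interpretable as an altitude; running the code
lifts nothing off the ground. That is the sense in which a dynamical system
is simulated, not implemented, by the computation.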
> > CHALMERS:
> > Input and output vectors are always finite, but the internal state
> > vectors can be either finite or infinite. The finite case is
> > simpler... The infinite case can be spelled out in an analogous fashion,
> > however. The main complication is that restrictions have to be
> > placed on the vectors and dependency rules, so that these do not
> > encode an infinite amount of information. This is not too
> > difficult
This is the usual extension of the Church/Turing Thesis, which clearly
applies to any finite, discrete system, to the more general case of
continuous and infinite systems. It is done by discrete approximation
and by recursion, an approximation that can be made as tight as we wish
(and have the resources for).
This is just to rule out the worry that a computer cannot implement a
mind because a computer is just finite and discrete.
(Besides, just about everything, including the brain, is finite, and
discrete at some scale.)
And of course all those other noncomputational structures and processes
(analog processes, transduction, parallel processing) can all be
simulated by discrete serial computation, again as closely as you like
(within the limits of time and processing resources).
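For the "as closely as you like" part, here is a minimal sketch (my own
illustration, assuming nothing beyond standard Python) of discrete serial
approximation of a continuous process: exponential decay, dx/dt = -x,
whose exact value at t = 1 is e^-1.

    import math

    def euler_decay(dt):
        """Approximate x(1) for dx/dt = -x, x(0) = 1, by discrete steps of size dt."""
        x = 1.0
        for _ in range(int(round(1.0 / dt))):
            x += -x * dt  # one discrete update standing in for continuous change
        return x

    exact = math.exp(-1.0)
    for dt in (0.1, 0.01, 0.001):
        print(dt, abs(euler_decay(dt) - exact))  # error shrinks as dt shrinks

Shrink the step size and the approximation tightens, at the cost of more
time and more processing: exactly the resource trade-off mentioned above.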
> > CHALMERS:
> > If even digestion is a computation, isn't this vacuous?
> > ...This objection rests on a
> > misunderstanding. It is true that any given instance of digestion
> > will implement some computation, as any physical system does, but
> > the system's implementing this computation is in general irrelevant
> > to its being an instance of digestion.
This is true. But it is not a "misunderstanding"! It is potentially
disastrous for computationalism. It shows that digestion is like flying:
it's NOT just the computation that matters. Trouble is, the very same
could be true of thinking too!
> Hunt:
> The author suggests that the actions carried out by the human body such
> as digestion, are computation.
I don't think that's what he says. I think he says that digestion can be
modelled computationally, but it is not computation. Nor is any movement
computation. (Hence a T3 robot is necessarily not just computational.)
But, as Cathy says, I think Chalmers equivocates on this. Not only is
thinking not the same as acting (passing T3 is a test of thinking, but
the acting is not itself the thinking), but inasmuch as thinking
involves any processes LIKE acting (or heating, secreting, or
digesting), it will be noncomputational. For such processes can only be
simulated, but not implemented, computationally (just like flying).
> > CHALMERS:
> > What about semantics? ... computations are
> > specified syntactically, not semantically.
> > ... the notion of semantic content is so ill-understood
> > that it desperately needs a foundation itself.
Bad news for the thesis that thinking = computation, because thoughts DO
have semantic content (meaning), and it is not just in the mind of the
external interpreter: It is somehow intrinsic to the system itself: the
mind's symbols are grounded. (How? That's the part that still
desperately needs work.)
> Hunt:
> syntax uses arbitrary symbols that have semantics applied to them in
> order to follow rules that in turn produce some computation.
Not quite. They produce the computation purely syntactically. It is their
meaning, their interpretation, that comes from elsewhere. Give a
calculator two numbers to multiply, and it will do it for you without
needing to use the "meaning" of the numbers or of multiplication. But
you will have to interpret the result. To the computer it is just
meaningless squiggles and squoggles, generated on the basis of a syntactic
rule (algorithm) operating only on the arbitrary shape of the symbols,
not their meaning.
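Here is a minimal sketch (my own toy example, not from Chalmers) of what
"operating only on the arbitrary shape of the symbols" means. The tokens
'@' and '#' mean nothing to the programme; an outside interpreter who reads
'@' as 1 and '#' as 0 can read the whole thing as bitwise AND on binary
numerals, but that reading plays no causal role in what the code does.

    RULES = {              # a purely shape-based rewrite table
        ('@', '@'): '@',
        ('@', '#'): '#',
        ('#', '@'): '#',
        ('#', '#'): '#',
    }

    def combine(a, b):
        """Combine two symbol strings position by position, using only the table."""
        return ''.join(RULES[(x, y)] for x, y in zip(a, b))

    print(combine('@#@@', '@@#@'))  # prints '@##@': squiggles until interpreted

Under one interpretation it has computed 1011 AND 1101 = 1001; under
another it is just string rewriting. The programme itself never touches
either reading.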
> > CHALMERS:
> > What counts is that the brain implements various complex
> > computations, not that it is a computer.
This is a bit ambiguous, because to be a computer is to implement
computations!
> Hunt:
> although the brain does
> implement complex computations, it is a brain and not a computer, which
> is a point I feel, is important to the whole debate.
Yes, but what's the point of the point if all that's needed in order to
think is to implement those computations? For then a computer could do
it too, and then it would be thinking too.
> > CHALMERS:
> > [Some say] that the right kind of behavior is sufficient for
> > mentality. The Turing test is a weak foundation, however, and one
> > to which AI need not appeal. It may be that any behavioral
> > description can be implemented by systems lacking mentality
> > altogether.... two mentally distinct systems can have the same
> > behavioural dispositions. A computational basis for cognition will
> > require a tighter link than this, then.
Here Chalmers points out (correctly) that it is definitely NOT true that
Thinking = Acting. And that sounds like bad news for the Turing Test, as
well as for the thesis that Thinking = Computing, if the computing is
simply what generates the acting!
So this is a good reminder to those of you who think thinking is just
the functional capacity to pass T2 (or T3, etc.): That's just not
true. They may be correlated; one may be a reliable predictor of the
other (maybe); but they are not the same thing.
> > CHALMERS:
> > Instead, the central property of computation on which I will focus
> > is... the fact that a computation
> > provides an abstract specification of the causal organization of a
> > system. Causal organization is the nexus between computation and
> > cognition. If cognitive systems have their mental properties in
> > virtue of their causal organization, and if that causal
> > organization can be specified computationally, then the thesis of
> > computational sufficiency is established. Similarly, if it is the
> > causal organization of a system that is primarily relevant in the
> > explanation of behavior, then the thesis of computational
> > explanation will be established. By the account above, we will
> > always be able to provide a computational specification of the
> > relevant causal organization, and therefore of the properties on
> > which cognition rests.
Now let me say this over again in ordinary Kid-Sib English: A programme
determines what's going on inside a computer, what causes what inside
there. Let's call this the computer's "causal organization." Well, if
our thinking is the result of the causal organization of our brains, and a
computer can get that same causal organization by running the right
programme, then it can think too.
Causal organization? But what about the causal organization of an
airplane? Can a computer get that same causal organization by running
the right programme? It can SIMULATE it, to be sure, but is a simulation
the same CAUSAL organization? Or is it merely the causal organization of
a symbol system that is INTERPRETABLE as flying?
There's a big difference there. And what is true of flying can just as
well be true of thinking.
> Hunt:
> If a cognitive system has a
> causal organisation and this can be specified computationally, then
> this is enough to assume that computers can simulate mentality.
But we were talking about implementing it, not just simulating it. We
want the computer to really think, just as we want it to really fly (if
it really has the right "causal organization"): But it can't fly. So
why should we think it can think? Just because you can see it's not
flying but you can't see it's not thinking? (The only one who could see
whether or not it was thinking would be the computer itself! The
other-minds problem, in full bloom!)
> Hunt:
> Chalmers is claiming that
> the computation provides an abstract specification of the causal
> organization of the system. In my mind, if the specification is
> abstract it cannot be implemented by computation. Abstraction
> suggests that there is a difference between the actual causal organisation
> and the computational causal organisation of the system being
> modelled.
You're almost right. The truth is, CAUSATION is not abstract, it is
concrete! A DESCRIPTION or SIMULATION of causation is abstract, and that
certainly CAN be implemented by computation, but the causation is not
real! It is merely symbols interpretable as if they were causing.
The computer implementing the abstract model of causation does itself
have a causal organization, but it's the WRONG one! It's the causal
organization of the hardware implementing that particular code, which
happens to be interpretable as the description of a causal system,
ANOTHER causal system.
Again, the plane is the best example. Surely the most concrete feature
of the causal structure of a plane is the power to cause itself to lift
off the ground. Does the computer simulating that causal structure
"abstractly" have that concrete causal power? No? Well then why would a
computer simulation of the causal structure of thinking have the
concrete causal power to think?
> > CHALMERS:
> > Most properties are not organizational invariants. The property of
> > flying is not, for instance: we can move an airplane to the ground
> > while preserving its causal topology, and it will no longer be
> > flying. Digestion is not: if we gradually replace the parts
> > involved in digestion with pieces of metal, while preserving causal
> > patterns, after a while it will no longer be an instance of
> > digestion: no food groups will be broken down, no energy will be
> > extracted, and so on. The property of being a tube of toothpaste is
> > not an organizational invariant: if we deform the tube into a
> > sphere, or replace the toothpaste by peanut butter while preserving
> > causal topology, we no longer have a tube of toothpaste.
Chalmers is being very candid here, in conceding all this. So we have to
ask him directly: What grounds do we have for believing that thinking
too is not on this list of things that are NOT "organizational
invariants"? Surely not just the fact that no one (except the thinker)
can actually observe the thinking?...
> Hunt:
> The author carries on stepping through the notion of the causal
> topology of systems, which is the interaction of the parts of a system,
> and that this, in his view, is part of the link between computation and
> cognition. He looks at organisational invariants, which are described
> as properties that are invariant to the causal topology.
And the upshot is, if we simply DECLARE that thinking is such an
"invariant," then we can conclude it is just computation, and vice
versa. But that is the thesis that is on trial here. It cannot be found
not guilty by simply assuming it to be not guilty!
What are the REASONS for believing that thinking is unlike flying and
digestion in this critical respect? The fact that no one but the thinker
can EXPERIENCE thinking is no reason whatsoever. That is merely the
familiar other-minds problem.
> > CHALMERS:
> > The central claim ... is that most mental properties
> > are organizational invariants. It does not matter how we stretch,
> > move about, or replace small parts of a cognitive system: as long
> > as we preserve its causal topology, we will preserve its mental
> > properties.
>
> Hunt:
> The author states that if you replace small parts of a cognitive
> system, as long as the causal topology is preserved the mental
> properties will remain the same. [What if we replaced]
> of a brain with microchips. Surely this
> means that the brain would be altered to the extent that it no longer
> functions as a brain and would no longer maintain mentality?
No one has the faintest idea how much and what bits of the brain could
or could not be replaced by microchips while leaving mental processes
intact. So no conclusions about whether or not cognition is an
"organizational invariant" can be made on the basis of this
non-knowledge! One thing can be said for sure: Sensorimotor transduction
can only be done by sensorimotor transducers (it's like flying and
digestion in that sense), and a lot of the brain does just that. So
those bits are not going to be replaceable by microchips. (They are
replaceable by synthetic transducers, but that's not computational
either.)
> Hunt:
> Chalmers states that there are two challenges for the argument of
> computation being able to simulate cognition, the first being that
> computation cannot do what cognition does, further suggesting that the
> causal structure of human cognition is too advanced for computation.
> The second being that even though computation may be able to capture
> the capacities of human cognition, but could not truly simulate
> mentality.
First, it's not just about "simulation." Second, there are reasons to
doubt that computation alone could pass T2; it definitely could not
pass T3. As to whether generating T2 power is enough to generate
thinking: Searle's argument shows a purely computational T2 passer
would not be enough. And we know that computation alone is not enough
to pass T3. As to uncertainty about whether a T3 passer is really
thinking: Turing's point is (or ought to be) that once you can't tell
the difference any more you may as well stop worrying about it.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html
> > CHALMERS:
> > But a computational model is just a simulation! According to this
> > objection, due to Searle (1980), Harnad (1989), and many others, we
> > do not expect a computer model of a hurricane to be a real
> > hurricane, so why should a computer model of mind be a real mind?
> > But this is to miss the important point about organizational
> > invariance. A computational simulation is not a mere formal
> > abstraction, but has rich internal dynamics of its own. If
> > appropriately designed it will share the causal topology of the
> > system that is being modeled, so that the system's organizationally
> > invariant properties will be not merely simulated but replicated.
Many words: "organizational invariance," "computational simulation,"
"formal abstraction," "internal dynamics," "causal topology,"
"replication."
Yet all we want to know is this: Will the computer running the
programme really be thinking? In other words, is thinking like moving
and digestion, or isn't it? Certainly Chalmers gives no reason to
believe it is not (apart from the fact that the only one who can tell
is the thinker).
> Hunt:
> as we do
> not really know how the mind works how would it be possible to
> replicate the causal topology of the system? If you cannot model the
> causal topology of the mind, then surely you cannot simulate mentality?
> However, if on the other hand it was possible to model the human mind
> computationally, then surely, if the model is computationally
> equivalent to a mind then it is a mind and not a computation? Chalmers
> carries on to agree with this argument.
You ask two questions. The answer is not the same: No, we don't know
how to either simulate or implement the mind now. But there is no
reason to believe we will not be able to do it eventually. That still
leaves the question of "computational equivalence" (yet ANOTHER word
for simulation?) and the answer is still: A simulation of thinking no
more thinks than a simulation of flying flies.
> > CHALMERS:
> > But artificial intelligence and computational cognitive science are
> > not committed to the claim that the brain is literally a Turing
> > machine with a moving head and a tape, and even less to the claim
> > that that tape is the environment. The claim is simply that some
> > computational framework can explain and replicate human cognitive
> > processes. It may turn out that the relevant computational
> > description of these processes is very fine-grained, reflecting
> > extremely complex causal dynamics among neurons, and it may well
> > turn out that there is significant variation in causal organization
> > between individuals. There is nothing here that is incompatible
> > with a computational approach to cognitive science.
This is all equivocation on what "explain and replicate" means. Weak AI
and the C/T Thesis already suggest that we'll be able to EXPLAIN
thinking computationally, because we can explain just about anything
computationally! The real question is: Can just running the right
computer programme turn a computer into a thinker?
> Hunt:
> To conclude and in question to this paper, I have had the question put
> to me - "Is intelligence more like the stuff a computer CAN be
> reconfigured to do, or like the stuff it can't be reconfigured to do
> (and why, and what's the difference)?"
>
> I think that intelligence lies within the person or people that are trying
> to reconfigure the computer, and the person or people that are having
> their intelligence matched by reconfiguration of a computer. The
> difference is that there is no difference. Intelligence of humans
> configuring a computer = configuration of a computer to match intelligence
> of humans.
Kid-Sib: I couldn't follow that at all! Was that supposed to be an
answer to the question?
Stevan Harnad