Chalmers: Computational Foundation

From: Hunt Catherine (chh398@ecs.soton.ac.uk)
Date: Sun Feb 25 2001 - 21:03:01 GMT


http://cogprints.soton.ac.uk/documents/disk0/00/00/03/19/index.html

Hunt:
This paper explores the relationship between cognition and
computation. Chalmers explores in depth the idea that computers can,
or could in principle, simulate the mind and its functions - in other
words, mentality.

Hunt:
He admits that the argument is a controversial one, and even states
that previous arguments in its favour have been less than successful.
In answer to this, he tries to justify the role of computation in
relation to cognition and artificial intelligence. Although he tries
to win the argument, I am left thinking that he is not entirely
convinced of the relationship between cognition and computation.
However, he does present some compelling arguments, and he is of the
opinion that computation will be central to cognitive science in the
years to come.

> CHALMERS:
> Perhaps no concept is more central to the foundations of modern
> cognitive science than that of computation. The ambitions of
> artificial intelligence rest on a computational framework, and in
> other areas of cognitive science, models of cognitive processes are
> most frequently cast in computational terms. The foundational role
> of computation can be expressed in two basic theses. First,
> underlying the belief in the possibility of artificial intelligence
> there is a thesis of computational sufficiency, stating that the
> right kind of computational structure suffices for the possession
> of a mind, and for the possession of a wide variety of mental
> properties. Second, facilitating the progress of cognitive science
> more generally there is a thesis of computational explanation,
> stating that computation provides a general framework for the
> explanation of cognitive processes and of behavior.

Hunt:
Here, Chalmers states that modern cognitive science uses the idea of
computation as a central focus, encompassing the ambitions of
artificial intelligence. He introduces two theses: computational
sufficiency, the claim that the right kind of computational structure
suffices for the possession of a mind and of a wide variety of mental
properties; and computational explanation, the claim that computation
provides a general framework for explaining cognitive processes and
behaviour.

> CHALMERS:
> These theses are widely held within cognitive science, but they are
> quite controversial. Some have questioned the thesis of
> computational sufficiency, arguing that certain human abilities
> could never be duplicated computationally (Dreyfus 1974; Penrose
> 1989), or that even if a computation could duplicate human
> abilities, instantiating the relevant computation would not suffice
> for the possession of a mind (Searle 1980). Others have questioned
> the thesis of computational explanation, arguing that computation
> provides an inappropriate framework for the explanation of
> cognitive processes (Edelman 1989; Gibson 1979), or even that
> computational descriptions of a system are vacuous (Searle 1990,
> 1991).

Hunt:
Central to the controversy over computers simulating the mind is the
objection that human abilities cannot be duplicated by computation
alone, and further that what goes on in our minds cannot be
duplicated at all. I personally agree with this objection: the human
mind is a complex entity that we do not yet fully understand. Since
it is human beings who develop the computations in question, if we do
not understand ourselves, how can we develop computations that
simulate the processes carried out in the mind?

> CHALMERS:
> Advocates of computational cognitive science have done their best
> to repel these negative critiques, but the positive justification
> for the foundational theses remains murky at best. Why should
> computation, rather than some other technical notion, play this
> foundational role? And why should there be the intimate link
> between computation and cognition that the theses suppose? In this
> paper, I will develop a framework that can answer these questions
> and justify the two foundational theses.

Hunt:
The author states that those in favour of computational cognitive
science have indeed tried to defend their views, but suggests that
their positive justifications have not been very successful in the
past. He proposes to do a better job by developing a framework that
answers two questions: why it is computation, rather than some other
technical notion, that plays this role, and why there should be such
an intimate link between computation and cognition in the first
place.

Hunt:
One question that I would like to ask is, what is "some other technical
notion"? Perhaps the author could have given an example to clarify the
argument to the reader.

> CHALMERS:
> Input and output vectors are always finite, but the internal state
> vectors can be either finite or infinite. The finite case is
> simpler, and is all that is required for any practical purposes.
> Even if we are dealing with Turing machines, a Turing machine with
> a tape limited to 10^{200} squares will certainly be all that is
> required for simulation or emulation within cognitive science and
> AI. The infinite case can be spelled out in an analogous fashion,
> however. The main complication is that restrictions have to be
> placed on the vectors and dependency rules, so that these do not
> encode an infinite amount of information. This is not too
> difficult, but I will not go into details here.

Hunt:
The author carries on by walking through the concepts of Finite State
Automata and Combinatorial State Automata in order to specify the
class of computations that are implemented by a physical system. One
point in the explanation is probably of no great importance, but I
would be interested to know where the 10^{200} figure comes from, as
there is no reference for it. Presumably it is simply meant as a
finite bound so large (far more tape squares than there are atoms in
the observable universe) that no practical simulation could ever
exhaust it.
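
Hunt:
To make the idea concrete, here is a minimal sketch of a
combinatorial state automaton in Python. It is entirely my own
illustrative construction, not notation from the paper: the class
name, the two-component state vector and the toy dependency rules are
all assumptions. The point is only that the internal state is a
vector of components, and that each component's next value is fixed
by a rule over the current state vector and the input.

    # A minimal combinatorial state automaton (CSA) sketch.
    # The state is a vector of components; each component updates
    # according to a dependency rule over the whole current state
    # vector and the current input.

    from typing import Callable, List

    Rule = Callable[[List[int], int], int]

    class CSA:
        def __init__(self, state: List[int], rules: List[Rule]):
            assert len(state) == len(rules)  # one rule per component
            self.state = state
            self.rules = rules

        def step(self, inp: int) -> List[int]:
            # All components update simultaneously from the old vector.
            self.state = [rule(self.state, inp) for rule in self.rules]
            return self.state

    # Toy example: component 0 toggles on input 1; component 1
    # copies component 0's previous value.
    csa = CSA(state=[0, 0],
              rules=[lambda s, i: s[0] ^ i,
                     lambda s, i: s[0]])
    for bit in [1, 0, 1, 1]:
        print(csa.step(bit))

A finite state automaton is just the special case where the whole
state is a single component.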

> CHALMERS:
> If even digestion is a computation, isn't this vacuous? This
> objection expresses the feeling that if every process, including
> such things as digestion and oxidation, implements some
> computation, then there seems to be nothing special about cognition
> any more, as computation is so pervasive. This objection rests on a
> misunderstanding. It is true that any given instance of digestion
> will implement some computation, as any physical system does, but
> the system's implementing this computation is in general irrelevant
> to its being an instance of digestion. To see this, we can note
> that the same computation could have been implemented by various
> other physical systems (such as my SPARC) without its being an
> instance of digestion. Therefore the fact that the system
> implements the computation is not responsible for the existence of
> digestion in the system.

Hunt:
The author notes that bodily processes such as digestion implement
some computation, and that this might seem to make cognition just
another such process, rendering the computational view vacuous. But,
he argues, this rests on a misunderstanding: the computation that a
digestive system implements could equally be implemented by some
other physical system (his SPARC workstation, for instance) without
that system thereby digesting anything, so implementing the
computation is not what makes the process an instance of digestion. I
find this a strange view. Surely the digestive system cannot be
divorced from the rest of the system it is a part of? If food reaches
the digestive system, it can only have reached its destination by
being placed in the mouth, followed by chewing, swallowing,
peristalsis and so on. I do not see how you can separate these from
digestion; there would be no digestion without the prior actions.
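
Hunt:
Chalmers's point about multiple implementation can be pictured with a
small sketch of my own (the transition table and the two "substrate"
classes are invented for illustration): one abstract computation, two
quite different imagined physical realisations, and the computation
is indifferent between them.

    # One abstract computation (a transition table) implemented by
    # two distinct "substrates". The table and classes are my own
    # toy example, not anything from Chalmers's paper.

    TABLE = {("idle", "go"): "busy", ("busy", "done"): "idle"}

    class RelaySubstrate:
        # States realised, say, by electrical relays.
        def __init__(self):
            self.state = "idle"
        def receive(self, signal):
            self.state = TABLE.get((self.state, signal), self.state)

    class WaterValveSubstrate:
        # States realised, say, by water levels; the causal
        # organisation, and hence the computation, is the same.
        def __init__(self):
            self.state = "idle"
        def receive(self, signal):
            self.state = TABLE.get((self.state, signal), self.state)

    for machine in (RelaySubstrate(), WaterValveSubstrate()):
        for signal in ("go", "done", "go"):
            machine.receive(signal)
        print(type(machine).__name__, machine.state)  # both "busy"

Neither realisation is thereby an instance of digestion, which is
exactly Chalmers's point; my objection above concerns the body, not
this formal claim.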

> CHALMERS:
> What about semantics? It will be noted that nothing in my account
> of computation and implementation invokes any semantic
> considerations, such as the representational content of internal
> states. This is precisely as it should be: computations are
> specified syntactically, not semantically. Although it may very
> well be the case that any implementations of a given computation
> share some kind of semantic content, this should be a consequence
> of an account of computation and implementation, rather than built
> into the definition. If we build semantic considerations into the
> conditions for implementation, any role that computation can play
> in providing a foundation for AI and cognitive science will be
> endangered, as the notion of semantic content is so ill-understood
> that it desperately needs a foundation itself.

Hunt:
The author argues that computations are specified syntactically, not
semantically, and that any semantic content shared by implementations
should fall out as a consequence of the account of computation rather
than being built into its definition. He adds that the notion of
semantic content is so poorly understood that building it into the
conditions for implementation would endanger any foundational role
computation can play in AI and cognitive science. But surely the
semantics of the computation are central to a system that is produced
to simulate something? My understanding is that syntax manipulates
arbitrary symbols according to rules, and that semantics is what must
be applied to those symbols for the computation to mean anything.
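
Hunt:
The claim that a computation can be specified purely syntactically
can be seen in a toy string-rewriting system (again my own example,
and a deliberately trivial one): the rules mention only the shapes of
the symbols, never what, if anything, they stand for.

    # Purely syntactic rule-following: the rewrite rules refer only
    # to symbol shapes, not to any meaning. Toy rules of my own.
    RULES = {"AB": "BA", "BB": "B"}

    def rewrite(s: str) -> str:
        changed = True
        while changed:
            changed = False
            for lhs, rhs in RULES.items():
                if lhs in s:
                    s = s.replace(lhs, rhs, 1)
                    changed = True
        return s

    # The system never "knows" what A or B stand for.
    print(rewrite("ABB"))  # -> "BA"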

> CHALMERS:
> Is the brain a computer in this sense? Arguably. For a start, the
> brain can be "programmed" to implement various computations by the
> laborious means of conscious serial rule-following; but this is a
> fairly incidental ability. On a different level, it might be argued
> that learning provides a certain kind of programmability and
> parameter-setting, but this is a sufficiently indirect kind of
> parameter-setting that it might be argued that it does not qualify.
> In any case, the question is quite unimportant for our purposes.
> What counts is that the brain implements various complex
> computations, not that it is a computer.

Hunt:
I whole-heartedly agree with Chalmers on his point here that a brain
can be programmed to a certain extent, and that this is done by
learning according to rules. He also states that what matters for his
purposes is not whether the brain counts as a computer, but that it
implements various complex computations - a distinction I feel is
important to the whole debate.

> CHALMERS:
> Justification of the thesis of computational sufficiency has
> usually been tenuous. Perhaps the most common move has been an
> appeal to the Turing test, noting that every implementation of a
> given computation will have a certain kind of behavior, and
> claiming that the right kind of behavior is sufficient for
> mentality. The Turing test is a weak foundation, however, and one
> to which AI need not appeal. It may be that any behavioral
> description can be implemented by systems lacking mentality
> altogether (such as the giant lookup tables of Block 1981). Even
> if behaviour suffices for mind, the demise of logical behaviorism
> has made it very implausible that it suffices for specific mental
> properties: two mentally distinct systems can have the same
> behavioural dispositions. A computational basis for cognition will
> require a tighter link than this, then.

Hunt:
The author is discussing the common appeal to the Turing test: every
implementation of a given computation will exhibit a certain kind of
behaviour, and the claim is that the right kind of behaviour is
sufficient for mentality. Chalmers thinks this is a weak foundation.
A system such as Block's giant lookup table might reproduce any
behavioural description while lacking mentality altogether, and even
if behaviour did suffice for having a mind, two mentally distinct
systems can share the same behavioural dispositions, so behaviour
cannot fix specific mental properties. The way in which this
paragraph was worded left me slightly confused at first, but his
conclusion is that a computational basis for cognition requires a
tighter link than behaviour alone.
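
Hunt:
Block's giant lookup table can be pictured with a trivial sketch (the
dialogue entries are my own invention, and a real Block table would
be astronomically large): every response is retrieved, and nothing is
computed that anyone would want to call thought.

    # A tiny stand-in for a Block-style lookup table: behaviour
    # comes from retrieval alone. Entries are my own invention.
    LOOKUP = {
        "Hello": "Hello! How are you?",
        "How are you?": "Very well, thank you.",
        "What is 2+2?": "4",
    }

    def respond(utterance: str) -> str:
        # No reasoning, no mental states: just a fetch.
        return LOOKUP.get(utterance, "I beg your pardon?")

    print(respond("What is 2+2?"))  # behaves "correctly" regardless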

> CHALMERS:
> Instead, the central property of computation on which I will focus
> is one that we have already noted: the fact that a computation
> provides an abstract specification of the causal organization of a
> system. Causal organization is the nexus between computation and
> cognition. If cognitive systems have their mental properties in
> virtue of their causal organization, and if that causal
> organization can be specified computationally, then the thesis of
> computational sufficiency is established. Similarly, if it is the
> causal organization of a system that is primarily relevant in the
> explanation of behavior, then the thesis of computational
> explanation will be established. By the account above, we will
> always be able to provide a computational specification of the
> relevant causal organization, and therefore of the properties on
> which cognition rests.

Hunt:
The author states that causal organization is the nexus between
computation and cognition: if a cognitive system has its mental
properties in virtue of its causal organisation, and that
organisation can be specified computationally, then computational
sufficiency follows, and computers could in principle possess
mentality. However, I do not agree with this argument. Chalmers
claims that a computation provides an abstract specification of the
causal organization of a system. In my mind, if the specification is
abstract then it cannot simply be implemented by computation:
abstraction suggests that there is a difference between the actual
causal organisation of the system being modelled and its
computational counterpart.

> CHALMERS:
> Most properties are not organizational invariants. The property of
> flying is not, for instance: we can move an airplane to the ground
> while preserving its causal topology, and it will no longer be
> flying. Digestion is not: if we gradually replace the parts
> involved in digestion with pieces of metal, while preserving causal
> patterns, after a while it will no longer be an instance of
> digestion: no food groups will be broken down, no energy will be
> extracted, and so on. The property of being a tube of toothpaste is
> not an organizational invariant: if we deform the tube into a
> sphere, or replace the toothpaste by peanut butter while preserving
> causal topology, we no longer have a tube of toothpaste.

Hunt:
The author carries on stepping through the notion of the causal
topology of a system - the pattern of interaction among its parts -
which in his view underpins the link between computation and
cognition. He looks at organisational invariants, properties that
remain fixed so long as the causal topology is preserved. I cannot
understand why he has not drawn a parallel to mentality here. If
parts of the brain were replaced with microchips, then surely it
would no longer possess mentality? That would make mentality no
organizational invariant at all.

> CHALMERS:
> In general, most properties depend essentially on certain features
> that are not features of causal topology. Flying depends on height,
> digestion depends on a particular physiochemical makeup, tubes of
> toothpaste depend on shape and physiochemical makeup, and so on.
> Change the features in question enough and the property in question
> will change, even though causal topology might be preserved
> throughout.

Hunt:
Again, the author makes no reference to mentality. Surely this is
central to the argument and parallels should be drawn, or
contradictions pointed out?

> CHALMERS:
> The central claim of this section is that most mental properties
> are organizational invariants. It does not matter how we stretch,
> move about, or replace small parts of a cognitive system: as long
> as we preserve its causal topology, we will preserve its mental
> properties.

Hunt:
The author states that if you replace small parts of a cognitive
system, then as long as the causal topology is preserved, the mental
properties will remain the same. I reiterate the argument I made
earlier about replacing parts of a brain with microchips: surely the
brain would be altered to the extent that it no longer functions as a
brain, and would no longer maintain mentality?
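
Hunt:
Chalmers's claim can at least be illustrated at the level of a toy
system (my own sketch, with invented part names): replace one part by
a differently built but causally equivalent part, and the system's
behaviour, trace for trace, is unchanged.

    # Organisational invariance in miniature: swap one part of a
    # system for a differently-built but causally equivalent part.
    # The system and both "parts" are my own invented example.

    def neuron_like(x, y):     # original part
        return (x + y) % 2

    def chip_like(x, y):       # replacement with the same causal role
        return x ^ y           # identical input-output behaviour

    def run(part, inputs):
        state, trace = 0, []
        for i in inputs:
            state = part(state, i)
            trace.append(state)
        return trace

    inputs = [1, 0, 1, 1, 0]
    print(run(neuron_like, inputs))  # [1, 1, 0, 1, 1]
    print(run(chip_like, inputs))    # identical trace

Whether a microchip could in fact play the same causal role as a
neuron is, of course, exactly what is in dispute.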

> CHALMERS:
> A computational basis for cognition can be challenged in two ways.
> The first sort of challenge argues that computation cannot do what
> cognition does: that a computational simulation might not even
> reproduce human behavioral capacities, for instance, perhaps
> because the causal structure in human cognition goes beyond what a
> computational description can provide. The second concedes that
> computation might capture the capacities, but argues that more is
> required for true mentality.

Hunt:
Chalmers states that there are two challenges to the claim that
computation can simulate cognition. The first is that computation
cannot do what cognition does, perhaps because the causal structure
of human cognition goes beyond what a computational description can
provide. The second concedes that computation may capture the
capacities of human cognition, but holds that more is required for
true mentality. Again, I feel that Chalmers is questioning his own
belief that computers can or could simulate the mind and its
functions. I do not personally believe that mentality can be
simulated by computation, no matter how accurate the simulation, and
so I agree with the first challenge.

> CHALMERS:
> But a computational model is just a simulation! According to this
> objection, due to Searle (1980), Harnad (1989), and many others, we
> do not expect a computer model of a hurricane to be a real
> hurricane, so why should a computer model of mind be a real mind?
> But this is to miss the important point about organizational
> invariance. A computational simulation is not a mere formal
> abstraction, but has rich internal dynamics of its own. If
> appropriately designed it will share the causal topology of the
> system that is being modeled, so that the system's organizationally
> invariant properties will be not merely simulated but replicated.

Hunt:
I agree with Searle and Harnad. Let us not forget that it is possible
to simulate something like a plane: you can make a person think,
computationally, that they are flying in a plane, but do not forget
that they would not actually be flying. Likewise, a computer model of
the mind would not really be a mind. But as we do not really know how
the mind works, how would it be possible to replicate the causal
topology of the system? If you cannot model the causal topology of
the mind, then surely you cannot simulate mentality? If, on the other
hand, it were possible to model the human mind computationally, then
surely, if the model is computationally equivalent to a mind, it is a
mind and not a mere computation? Chalmers carries on to agree with
this argument.

> CHALMERS:
> What about the environment? Some mental properties, such as
> knowledge and even belief, depend on the environment being a
> certain way. Computational organization, as I have outlined it,
> cannot determine the environmental contribution, and therefore
> cannot fully guarantee this sort of mental property. But this is no
> problem. All we need computational organization to give us is the
> internal contribution to mental properties: that is, the same
> contribution that the brain makes (for instance, computational
> organization will determine the so-called "narrow content" of a
> belief, if this exists; see Fodor 1987). The full panoply of mental
> properties might only be determined by
> computation-plus-environment, just as it is determined by
> brain-plus-environment. These considerations do not count against
> the prospects of artificial intelligence, and they affect the
> aspirations of computational cognitive science no more than they
> affect the aspirations of neuroscience.

Hunt:
The author asserts that the environment poses no problem for the
computational modelling of mentality. I do not believe that you could
create a simulation of mentality that would react in the ways that
humans do. I refer to my earlier argument that the human mind is too
complex to model because we do not fully understand it ourselves, and
I think it would be impossible to model emotions that are tied to the
environment, as we are complex beings that react in unpredictable
ways to many stimuli. For instance, it would be easy to model a
reaction to heat: boiling water in contact with the skin = pain. But
some people claim to get pleasure from pain, and how could you know
to model this correctly? What is right and what is wrong when you are
trying to model mentality? Some might argue that there is no wrong,
but I would imagine rules have to be followed, so a decision would
have to be made as to what is right and wrong in framing those rules.
How do you make the rules? And if you can make the rules, then I
refer to my prior argument: if the model is computationally
equivalent to a mind, then it is a mind and not a mere computation.

> CHALMERS:
> But artificial intelligence and computational cognitive science are
> not committed to the claim that the brain is literally a Turing
> machine with a moving head and a tape, and even less to the claim
> that that tape is the environment. The claim is simply that some
> computational framework can explain and replicate human cognitive
> processes. It may turn out that the relevant computational
> description of these processes is very fine-grained, reflecting
> extremely complex causal dynamics among neurons, and it may well
> turn out that there is significant variation in causal organization
> between individuals. There is nothing here that is incompatible
> with a computational approach to cognitive science.

Hunt:
To a certain extent I agree with Chalmers that there is nothing in
his argument that is incompatible with a computational approach to
cognitive science. Indeed, in my view, it may be possible to explain
and replicate human cognitive processes at some stage in the future.
Research is being carried out, and findings produced, at an
astounding rate, and may yet answer the questions I have raised about
Chalmers' arguments. But at this point in time we do not understand
the complete set of processes involved in human cognition, which, in
my view, leaves his argument unsupported for the present.

Hunt:
To conclude, and in response to this paper, I have had the question
put to me - "Is intelligence more like the stuff a computer CAN be
reconfigured to do, or like the stuff it can't be reconfigured to do
(and why, and what's the difference)?"

Hunt:
I think that intelligence lies with the person or people who are
trying to reconfigure the computer, and with the person or people
whose intelligence is being matched by the reconfiguration of a
computer. The difference is that there is no difference: the
intelligence of the humans configuring a computer = the configuration
of a computer to match the intelligence of humans.

Catherine Hunt < >


