Re: Chalmers on Computation

From: Pentland, Gary (
Date: Thu Mar 02 2000 - 13:35:45 GMT

> based on the idea that a system implements a computation
> if the causal structure of the system mirrors the formal structure
> of the computation.

Sounds fair to assume that...

> Advocates of computational cognitive science have done
> their best to repel these negative critiques, but the positive
> justification for the foundational theses remains murky at best.
> Why should computation, rather than some other technical notion,
> play this foundational role? And why should there be the intimate
> link between computation and cognition that the theses suppose? In
> this paper, I will develop a framework that can answer these
> questions and justify the two foundational theses.

Good introduction, but watch out for the contradiction later...

> What are the conditions under which a physical system
> implements a given computation? Searle (1990) has argued that
> there is no objective answer to this question, and that any given
> system can be seen to implement any computation if interpreted
> appropriately.

This sounds like rubbish to me... Interpretation should not be used to
argue that anything can do anything; rather, interpretation depends on
the environment, and the environment plays a significant part in it. On
this basis I propose that computation is flawed, as it is supposed to be
implementation-independent, which does not leave much space for the
environment to be considered.

> A physical system implements a given computation when
> there exists a grouping of physical states of the system into
> state-types and a one-to-one mapping from formal states of the
> computation to physical state-types, such that formal states
> related by an abstract state-transition relation are mapped onto
> physical state-types related by a corresponding causal
> state-transition relation.

This is a good definition, and explains a lot about the paper; however,
I believe that this level of abstraction can be used to prove the...
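
To make the definition concrete, here is a quick sketch in Python (my
own toy example, not Chalmers's): a two-state formal automaton, a
hypothetical four-state "physical" system, and a check that the grouping
of physical states into state-types mirrors the formal transitions.

```python
# Toy check of Chalmers's implementation condition (my own example, not his).
# Formal computation: a 2-state FSA whose transitions are A -> B -> A.
formal_transition = {"A": "B", "B": "A"}

# Hypothetical physical system: fine-grained states p0..p3 and their
# causal successors.
causal_successor = {"p0": "p2", "p1": "p3", "p2": "p1", "p3": "p0"}

# Grouping of physical states into state-types, mapped onto the formal states.
state_type = {"p0": "A", "p1": "A", "p2": "B", "p3": "B"}

def implements(formal, causal, grouping):
    """True iff every causal transition mirrors the corresponding formal one."""
    return all(grouping[causal[p]] == formal[grouping[p]] for p in causal)

print(implements(formal_transition, causal_successor, state_type))  # True
```

A grouping that fails to respect the transitions makes `implements`
return False, which is exactly why not every grouping counts as an
implementation.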

> Simple finite-state automata are unsatisfactory for many
> purposes, due to the monadic nature of their states. The states in
> most computational formalisms have a combinatorial structure: a
> cell pattern in a cellular automaton, a combination of tape-state
> and head-state in a Turing machine, variables and registers in a
> Pascal program, and so on. All this can be accommodated within the
> framework of combinatorial-state automata (CSAs), which differ from
> FSAs only in that an internal state is specified not by a monadic
> label S, but by a vector [S^1, S^2, S^3, ...]. The elements of this
> vector can be thought of as the components of the overall state,
> such as the cells in a cellular automaton or the tape-squares in a
> Turing machine. There are a finite number of possible values S_j^i
> for each element S^i, where S_j^i is the jth possible value for the
> ith element. These values can be thought of as "substates". Inputs
> and outputs can have a similar sort of complex structure: an input
> vector is [I^1,...,I^k], and so on. State-transition rules are
> determined by specifying, for each element of the state-vector, a
> function by which its new state depends on the old overall
> state-vector and input-vector, and the same for each element of the
> output-vector.

But Chalmers also states that...

> One might think that CSAs are not much of an advance on
> FSAs. Finite CSAs, at least, are no more computationally powerful
> than FSAs; there is a natural correspondence that associates every
> finite CSA with an FSA with the same input/output behavior.

And he goes on to mention Turing machines as "infinite CSAs", which are
more powerful.
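
For what it's worth, the quoted CSA definition is easy to sketch: each
element of the new state-vector is a function of the whole old
state-vector and the input-vector. A toy two-element CSA step in Python,
where the rules and substates are my own invention:

```python
# Toy combinatorial-state automaton (CSA) step, following the quoted
# definition: each element's new value depends on the full old state-vector
# and the input-vector. The rules below are invented for illustration.

def csa_step(state, inp, element_rules):
    """Apply one CSA transition: every element updated from the old state."""
    return tuple(rule(state, inp) for rule in element_rules)

# A 2-element CSA over {0, 1} substates.
rules = (
    lambda s, i: (s[0] + s[1]) % 2,   # element 1: parity of the old state
    lambda s, i: i[0],                # element 2: copies the input
)

state = (0, 1)
state = csa_step(state, (1,), rules)
print(state)  # (1, 1)
```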

> This definition can straightforwardly be applied to
> yield implementation conditions for more specific computational
> formalisms. To develop an account of the implementation-conditions
> for a Turing machine, say, we need only redescribe the Turing
> machine as a CSA. The overall state of a Turing machine can be seen
> as a giant vector, consisting of (a) the internal state of the
> head, and (b) the state of each square of the tape, where this
> state in turn is an ordered pair of a symbol and a flag indicating
> whether the square is occupied by the head (of course only one
> square can be so occupied; this will be ensured by restrictions on
> initial state and on state-transition rules). The state-transition
> rules between vectors can be derived naturally from the quintuples
> specifying the behavior of the machine-head. As usually understood,
> Turing machines only take inputs at a single time-step (the start),
> and do not produce any output separate from the contents of the
> tape. These restrictions can be overridden in natural ways, for
> example by adding separate input and output tapes, but even with
> inputs and outputs limited in this way there is a natural
> description as a CSA. Given this translation from the Turing
> machine formalism to the CSA formalism, we can say that a given
> Turing machine is implemented whenever the corresponding CSA is
> implemented.

Sounds like a fair argument, but I believe that he is somehow missing
the point. CSAs, he said, are not that powerful, but he persists in
using them for interpretation, proposing that they offer a better
description of a system; so why does he feel the need to "rubbish"
Turing's work? Surely he should move on to answering the critics of...
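
His Turing-machine-to-CSA translation is at least easy to sketch: the
overall state becomes one vector holding the head's internal state plus,
for every tape square, a (symbol, head-here) pair. A minimal Python
encoding, assuming a toy three-square tape (the packing scheme is my
own):

```python
# Sketch of Chalmers's Turing-machine-as-CSA redescription: the overall
# state is a single vector of (a) the head's internal state and (b) an
# (symbol, head_here) pair for each tape square. Toy encoding, my own.

def tm_as_csa_vector(head_state, tape, head_pos):
    """Pack a TM configuration into a single CSA-style state vector."""
    squares = tuple((sym, pos == head_pos) for pos, sym in enumerate(tape))
    return (head_state,) + squares

vec = tm_as_csa_vector("q0", ["1", "0", "1"], head_pos=1)
print(vec)  # ('q0', ('1', False), ('0', True), ('1', False))
```

The restriction that only one square carries a True flag corresponds to
his point that only one square can be occupied by the head.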

> Does every system implement some computation? Yes. For
> example, every physical system will implement the simple FSA with a
> single internal state; most physical systems will implement the
> 2-state cyclic FSA, and so on. This is no problem, and certainly
> does not render the account vacuous. That would only be the case if
> every system implemented every computation, and that is not the
> case. ..... There is no reason to suppose that the causal structure
> of an arbitrary system (such as Searle's wall) will satisfy these
> constraints. It is true that while we lack knowledge of the
> fundamental constituents of matter, it is impossible to prove that
> arbitrary objects do not implement every computation (perhaps every
> proton has an infinitely rich internal structure), but anybody who
> denies this conclusion will need to come up with a remarkably
> strong argument.

OK, so every system does some computation... but there is no system
that does EVERY computation; sounds fair. But he proposes that some
systems can implement many, as a complex computation is merely a product
of many simpler ones.
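
The trivial cases he mentions can be seen directly. A sketch, with an
invented "physical" state sequence: mapping everything to a single
formal state implements the one-state FSA, and an alternating grouping
implements the 2-state cyclic FSA.

```python
# Why every physical system implements the one-state FSA: map every
# physical state to the single formal state S; every causal transition
# then trivially mirrors the self-loop S -> S. The trace is invented.

physical_trace = ["hot", "warm", "cool", "cold"]   # hypothetical states

# One-state FSA: everything is S, and S -> S.
grouping = {p: "S" for p in physical_trace}
ok = all(grouping[b] == "S" for a, b in zip(physical_trace, physical_trace[1:]))
print(ok)  # True

# The 2-state cyclic FSA (S -> T -> S) needs more: the grouping must alternate.
cycle_grouping = {p: ("S" if i % 2 == 0 else "T")
                  for i, p in enumerate(physical_trace)}
flip = {"S": "T", "T": "S"}
ok2 = all(cycle_grouping[b] == flip[cycle_grouping[a]]
          for a, b in zip(physical_trace, physical_trace[1:]))
print(ok2)  # True
```

The vacuousness worry only bites if every grouping works for every
computation, which (as he says) is not the case.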

> It is true that any given instance of digestion will implement
> some computation, as any physical system
> does, but the system's implementing this computation is in general
> irrelevant to its being an instance of digestion. To see this, we
> can note that the same computation could have been implemented by
> various other physical systems (such as my SPARC) without its
> being an instance of digestion. Therefore the fact that the system
> implements the computation is not responsible for the existence of
> digestion in the system.

What is he trying to say here? Can his SPARC digest, or can it not
digest but only simulate it... implementation doesn't matter... or is it
not actually digestion on his SPARC but a meaningless computation? If
this is the case, surely his SPARC carrying out cognition computations
isn't actually cognition. Has he shot himself in the foot?

> CHALMERS:> ...a certain class of computations such that any
> system implementing that computation is cognitive.

Is this the critical point of complexity argument again?

> ...any role that computation can play in providing a foundation for
> AI and cognitive science will be endangered,
> as the notion of semantic content is so ill-understood that it
> desperately needs a foundation itself.

Sounds like he is partially preparing for defeat... after all, isn't
computation semantically interpretable?

> In the words of Haugeland (1985), if you take care of
> the syntax, the semantics will take care of itself.

I don't like this argument; Chalmers just stated that semantics are
independent of syntax.

> The Turing test is a weak foundation, however, and one
> to which AI need not appeal. It may be that any behavioral
> description can be implemented by systems lacking mentality
> altogether (such as the giant lookup tables of Block 1981). Even
> if behavior suffices for mind, the demise of logical behaviorism
> has made it very implausible that it suffices for specific mental
> properties: two mentally distinct systems can have the same
> behavioral dispositions. A computational basis for cognition will
> require a tighter link than this, then.

He is right, a tighter link is required.
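
Block's lookup-table point is easy to sketch: a table keyed on the whole
conversation history so far reproduces any finite behavioural
description by pure retrieval, with nothing worth calling mentality
going on. The entries below are invented for illustration.

```python
# Sketch of Block's (1981) lookup-table objection: a table keyed on the
# entire conversation history matches any finite behavioural description
# while doing nothing but retrieval. Entries are my own invention.

lookup = {
    (): "Hello.",
    ("Hello.", "How are you?"): "Fine, thanks.",
}

def respond(history):
    """'Converse' by pure table lookup on the full history so far."""
    return lookup.get(tuple(history), "I don't follow.")

print(respond([]))                          # Hello.
print(respond(["Hello.", "How are you?"]))  # Fine, thanks.
```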

> Most properties are not organizational invariants. The
> property of flying is not, for instance: we can move an airplane to
> the ground while preserving its causal topology, and it will no
> longer be flying. Digestion is not: if we gradually replace the
> parts involved in digestion with pieces of metal, while preserving
> causal patterns, after a while it will no longer be an instance of
> digestion: no food groups will be broken down, no energy will be
> extracted, and so on. The property of being a tube of toothpaste is
> not an organizational invariant: if we deform the tube into a
> sphere, or replace the toothpaste by peanut butter while preserving
> causal topology, we no longer have a tube of toothpaste.

> In general, most properties depend essentially on certain features
> that are not features of causal topology. Flying depends on
> height, digestion depends on a particular physiochemical makeup,
> tubes of toothpaste depend on shape and physiochemical makeup, and
> so on. Change the features in question enough and the property in
> question will change, even though causal topology might be
> preserved throughout.

If this is all true then AI has some problems, as state of mind must be
dependent on some environmental factors; hormones, for example, will
alter one's perception of an otherwise identical situation... compare
this to Chalmers's flying argument.

> Such properties include knowledge (if we move a system
> that knows that P into an environment where P is not true, then it
> will no longer know that P), and belief, on some construals where
> the content of a belief depends on environmental context. However,
> mental properties that depend only on internal (brain) state will
> be organizational invariants.

If I know g to be 9.8 m/s^2, and I went to the Moon, where g isn't
9.8 m/s^2, then I would still remember (know) 9.8 m/s^2 as g, even
though it would be irrelevant and incorrect in the context of being on
the Moon.

> Phenomenal properties are more problematic. It seems
> unlikely that these can be defined by their causal roles (although
> many, including Lewis and Armstrong, think they might be). To be a
> conscious experience is not to perform some role, but to have a
> particular feel. These properties are characterized by what it is
> like to have them, in Nagel's (1974) phrase. Phenomenal properties
> are still quite mysterious and ill-understood.

Getting ready for problems again... Seems to gloss quickly over the
problems with his argument, not attempting to answer anything much.

> To establish the thesis of computational sufficiency, all we need
> to do now is establish that organizational invariants are fixed by
> some computational structure. This is quite straightforward.

Seems ambitious to me.

> But a computational model is just a simulation!
> According to this objection, due to Searle (1980), Harnad (1989),
> and many others, we do not expect a computer model of a hurricane
> to be a real hurricane, so why should a computer model of mind be a
> real mind? But this is to miss the important point about
> organizational invariance. A computational simulation is not a mere
> formal abstraction, but has rich internal dynamics of its own. If
> appropriately designed it will share the causal topology of the
> system that is being modeled, so that the system's organizationally
> invariant properties will be not merely simulated but replicated.

But they aren't replicated... It is a simulation, and in my view the
product will never be complete.

> The Chinese room. There is not room here to deal with
> Searle's famous Chinese room argument in detail. I note, however,
> that the account I have given supports the "Systems reply",
> according to which the entire system understands Chinese even if
> the homunculus doing the simulating does not. Say the overall
> system is simulating a brain, neuron-by-neuron. Then like any
> implementation, it will share important causal organization with
> the brain. In particular, if there is a symbol for every neuron,
> then the patterns of interaction between slips of paper bearing
> those symbols will mirror patterns of interaction between neurons
> in the brain, and so on. This organization is implemented in a
> baroque way, but we should not let the baroqueness blind us to the
> fact that the causal organization - real, physical causal
> organization - is there. (The same goes for a simulation of
> cognition at a level above the neural, in which the shared causal
> organization will lie at a coarser level.)

> It is precisely in virtue of this causal organization that the
> system possesses its mental properties. We can rerun a version of
> the "dancing qualia" argument to see this. In principle, we can
> move from the brain to the Chinese room simulation in small steps,
> replacing neurons at each step by little demons doing the same
> causal work, and then gradually cutting down labor by replacing two
> neighboring demons by one who does the same work. Eventually we
> arrive at a system where a single demon is responsible for
> maintaining the causal organization, without requiring any real
> neurons at all. This organization might be maintained between marks
> on paper, or it might even be present inside the demon's own head,
> if the calculations are memorized. The arguments about
> organizational invariance all hold here - for the same reasons as
> before, it is implausible to suppose that the system's experiences
> will change or disappear.

So will that demon be the simulation... If that demon is not
intelligent, will it learn to be intelligent if it is simulating a
human, or is it bounded by its own computational complexity?

> The full panoply of mental properties might only be
> determined by computation-plus-environment, just as it is
> determined by brain-plus-environment. These considerations do not
> count against the prospects of artificial intelligence, and they
> affect the aspirations of computational cognitive science no more
> than they affect the aspirations of neuroscience.

I disagree... Neuroscience is the study of the physical properties of
the brain; anything else is an attempt to understand its function.
Neuroscience doesn't go above a cellular level.

> What about Gödel's theorem? Gödel's theorem states that
> for any consistent formal system, there are statements of
> arithmetic that are unprovable within the system. This has led some
> (Lucas 1963; Penrose 1989) to conclude that humans have abilities
> that cannot be duplicated by any computational system. For example,
> our ability to "see" the truth of the Gödel sentence of a formal
> system is argued to be non-algorithmic. I will not deal with this
> objection in detail here, as the answer to it is not a direct
> application of the current framework.

But isn't this one of the questions Chalmers has to answer?

> Discreteness and continuity. An important objection
> notes that the CSA formalism only captures discrete causal
> organization, and argues that some cognitive properties may depend
> on continuous aspects of that organization, such as analog values
> or chaotic dependencies...

> ...In this case, the specification of discrete state-transitions
> between states can be replaced by differential equations
> specifying how continuous quantities change in continuous time,
> giving a thoroughly continuous computational framework. MacLennan
> (1990) describes a framework along these lines. Whether such a
> framework truly qualifies as computational is largely a
> terminological matter, but it is arguable that the framework
> is significantly similar in kind to the traditional approach; all
> that has changed is that discrete states and steps have been
> "smoothed out".

Doesn't this make it impossible to implement? All hardware must have
some form of clock to enable some vague form of synchronisation.
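
To illustrate the "smoothed out" framework: the discrete transition rule
becomes a differential equation, and any digital run of it reintroduces
a clock in the form of the step size. A toy Euler integration of
dx/dt = -x (my own choice of dynamics, not from the paper):

```python
# Sketch of the continuous framework: the discrete state-transition rule
# is replaced by a differential equation dx/dt = f(x). A digital run of
# it reintroduces a clock as the step size dt, which is the worry above.
# The dynamics f(x) = -x is a toy choice of mine, not from the paper.

def euler_run(f, x0, dt, steps):
    """Integrate dx/dt = f(x) with explicit Euler steps of size dt."""
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x

# Analytically x(t) = exp(-t); with dt = 0.001 the discrete run is close.
x = euler_run(lambda x: -x, 1.0, 0.001, 1000)
print(round(x, 3))  # 0.368, close to exp(-1)
```

Shrinking dt makes the run closer to the continuous dynamics but never
removes the step entirely, which seems to be the point of the clock
objection.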

> Even so, it is implausible that the correct functioning
> of mental processes depends on the precise value of the tenth
> decimal place of analog quantities. The presence of background
> noise and randomness in biological systems implies that such
> precision would inevitably be "washed out" in practice. It follows
> that although a discrete simulation may not yield precisely the
> behavior that a given cognitive system produces on a given
> occasion, it will yield plausible behavior that the system might
> have produced had background noise been a little different.

Guesswork at best... Does it mean that the noise is irrelevant? I
believe that noise etc. is important, as the brain has less mass than is
needed to process the amount of information that the nervous system is
capable of carrying. Noise in the process of allocating resources and
selecting which data to process will have catastrophic effects on the...
> In a similar way, a computationalist need not claim that
> the brain is a von Neumann machine, or has some other specific
> architecture. Like Turing machines, von Neumann machines are just
> one kind of architecture, particularly well-suited to
> programmability, but the claim that the brain implements such an
> architecture is far ahead of any empirical evidence and is most
> likely false. The commitments of computationalism are more
> general.

But does this make it too general to implement?

> Minimal computationalism is compatible with such diverse
> programs as connectionism, logicism, and approaches focusing on
> dynamic systems, evolution, and artificial life. It is occasionally
> said that programs such as connectionism are "noncomputational",
> but it seems more reasonable to say that the success of such
> programs would vindicate Turing's dream of a computational
> intelligence, rather than destroying it.

Pentland, Gary <>

This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:27 GMT