> CHALMERS:
> We cannot justify the foundational role of computation without first
> answering the question: What are the conditions under which a physical
> system implements a given computation? Searle (1990) has argued that there
> is no objective answer to this question, and that any given system can be
> seen to implement any computation if interpreted appropriately. He argues,
> for instance, that his wall can be seen to implement the Wordstar program.
> I will argue that there is no reason for such pessimism, and that
> objective conditions can be straightforwardly spelled out.
I would agree with Chalmers' criticism of Searle, although there is
some validity in the point that whether a given system implements a
given computation is partly a matter of interpretation. Searle could
then argue that anything humans perceive can be interpreted in an
infinite number of ways, so that nothing could be proved or disproved:
the outcome would always depend on the interpretation chosen.
> CHALMERS:
> Justification of the thesis of computational sufficiency has usually been
> tenuous. Perhaps the most common move has been an appeal to the Turing
> test, noting that every implementation of a given computation will have a
> certain kind of behavior, and claiming that the right kind of behavior is
> sufficient for mentality. The Turing test is a weak foundation, however,
> and one to which AI need not appeal. It may be that any behavioral
> description can be implemented by systems lacking mentality altogether
> (such as the giant lookup tables of Block 1981). Even if behavior suffices
> for mind, the demise of logical behaviorism has made it very implausible
> that it suffices for specific mental properties: two mentally distinct
> systems can have the same behavioral dispositions. A computational basis
> for cognition will require a tighter link than this, then.
It is true that every implementation of a given computation will have
a certain kind of behaviour. It is also true that systems possessing
mentality will have a certain kind of behaviour. But the two do not
necessarily coincide in the way suggested above. I would argue
strongly against behaviour being sufficient for possession of mind.
Can behaviour not be seen in plants? A plant can be seen to tilt
towards a source of light - but I would not say that a plant is in
possession of mind.
> CHALMERS:
> Instead, the central property of computation on which I will focus is one
> that we have already noted: the fact that a computation provides an
> abstract specification of the causal organization of a system. Causal
> organization is the nexus between computation and cognition. If cognitive
> systems have their mental properties in virtue of their causal
> organization, and if that causal organization can be specified
> computationally, then the thesis of computational sufficiency is
> established. Similarly, if it is the causal organization of a system that
> is primarily relevant in the explanation of behavior, then the thesis of
> computational explanation will be established.
It is believable that cognitive systems have their mental properties
in virtue of their causal organization. But can that causal
organization be specified computationally? The claim is that a
computation provides an abstract specification of the causal
organization of a system. If the specification is abstract, then
surely some aspects of the causal organization will not be captured
by the computation.
> CHALMERS:
> Call a property P an organizational invariant if it is invariant with
> respect to causal topology: that is, if any change to the system that
> preserves the causal topology preserves P. The sort of changes in question
> include: (a) moving the system in space; (b) stretching, distorting,
> expanding and contracting the system; (c) replacing sufficiently small
> parts of the system with parts that perform the same local function (e.g.
> replacing a neuron with a silicon chip with the same I/O properties); (d)
> replacing the causal links between parts of a system with other links that
> preserve the same pattern of dependencies (e.g., we might replace a
> mechanical link in a telephone exchange with an electrical link); and (e)
> any other changes that do not alter the pattern of causal interaction
> among parts of the system.
Can we be sure that changes of this kind really preserve everything
that matters? A causal topology is described in the paper as
representing "the abstract causal organization of the system", in
other words "the pattern of interaction among parts of the system",
which "can be thought of as a dynamic topology analogous to the
static topology of a graph or network". What if the interaction among
parts of the system is time dependent? Stretching, distorting,
expanding or contracting the system will probably disturb that time
dependence.
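To make this worry concrete, here is a minimal sketch (in Python; the
three-part network and the relabelling are my own toy example, not
anything from the paper) of a causal topology treated as a directed
graph of which parts influence which. Replacing a part with one that
has the same inputs and outputs passes the check, but nothing in this
representation records how long each interaction takes - which is
exactly the time dependence in question.

    # Causal topology as a directed graph of "which parts influence
    # which".  The network and the relabelling are invented toys.

    # Original system: part A drives B and C, and B drives C.
    neural_system = {
        "neuron_A": {"neuron_B", "neuron_C"},
        "neuron_B": {"neuron_C"},
        "neuron_C": set(),
    }

    # Change (c): replace neuron_B with a silicon chip having the same
    # inputs and outputs.  Only the label of the part changes.
    relabel = {"neuron_A": "neuron_A", "neuron_B": "chip_B",
               "neuron_C": "neuron_C"}

    silicon_system = {
        relabel[part]: {relabel[target] for target in targets}
        for part, targets in neural_system.items()
    }

    def same_causal_topology(sys1, sys2, mapping):
        """True if mapping is a one-to-one relabelling of parts that
        preserves the pattern of causal links."""
        if set(mapping) != set(sys1) or set(mapping.values()) != set(sys2):
            return False
        return all(
            {mapping[t] for t in sys1[part]} == sys2[mapping[part]]
            for part in sys1
        )

    print(same_causal_topology(neural_system, silicon_system, relabel))  # True

    # What is *not* represented: transmission delays, firing rates, or
    # any other timing.  Two systems can pass this check while
    # differing in exactly the time dependence discussed above.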
> CHALMERS:
> Most properties are not organizational invariants. The property of flying
> is not, for instance: we can move an airplane to the ground while
> preserving its causal topology, and it will no longer be flying. Digestion
> is not: if we gradually replace the parts involved in digestion with
> pieces of metal, while preserving causal patterns, after a while it will
> no longer be an instance of digestion: no food groups will be broken down,
> no energy will be extracted, and so on. The property of being a tube of
> toothpaste is not an organizational invariant: if we deform the tube into
> a sphere, or replace the toothpaste by peanut butter while preserving
> causal topology, we no longer have a tube of toothpaste.
Could a similar argument not be put forward against mentality being an
organizational invariant? If we gradually replace the neurons in a
brain with silicon chips, while preserving causal patterns, will it
still perform as before? The result would very likely be a powerful
"computer", but would it still possess mentality?
> CHALMERS:
> In general, most properties depend essentially on certain features that
> are not features of causal topology. Flying depends on height, digestion
> depends on a particular physiochemical makeup, tubes of toothpaste depend
> on shape and physiochemical makeup, and so on. Change the features in
> question enough and the property in question will change, even though
> causal topology might be preserved throughout.
What does mentality depend on? Does mentality not depend on a
particular physiochemical makeup, just as digestion does?
> CHALMERS:
> The central claim of this section is that most mental properties are
> organizational invariants. It does not matter how we stretch, move about,
> or replace small parts of a cognitive system: as long as we preserve its
> causal topology, we will preserve its mental properties.
What has gone before applies again here - surely the electrical
impulses in the brain rely heavily on timing. The first neuron or
group of neurons to react to a given stimulus will produce impulses
that spread to further neurons, which react in turn. It seems
probable to me that the time it takes for impulses to travel along
different connections to different neurons will affect the order in
which neurons fire, and hence the reaction of the system as a whole.
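As a toy illustration of this point (the delays below are numbers I
have made up), the order in which spikes from two pathways reach a
downstream neuron depends entirely on the conduction delays, and
those delays are not part of the causal topology:

    def arrival_order(delay_path_1, delay_path_2):
        """Return which pathway's spike reaches the downstream neuron
        first, assuming both pathways are triggered at time 0 (ms)."""
        arrivals = {"path_1": delay_path_1, "path_2": delay_path_2}
        return sorted(arrivals, key=arrivals.get)

    print(arrival_order(2.0, 5.0))  # ['path_1', 'path_2']
    print(arrival_order(6.0, 5.0))  # ['path_2', 'path_1'] - same links,
                                    # different timing, different order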
> CHALMERS:
> An exception has to be made for properties that are partly supervenient on
> states of the environment. Such properties include knowledge (if we move a
> system that knows that P into an environment where P is not true, then it
> will no longer know that P), and belief, on some construals where the
> content of a belief depends on environmental context. However, mental
> properties that depend only on internal (brain) state will be
> organizational invariants. This is not to say that causal topology is
> irrelevant to knowledge and belief. It will still capture the internal
> contribution to those properties - that is, causal topology will
> contribute as much as the brain contributes. It is just that the
> environment will also play a role.
If a system that knows that P is moved into an environment where P is
not true, is the claim above that the system will simply forget P?
Surely a system that truly possesses mentality would know that it
knew P, but would also know that P is no longer true?
> CHALMERS:
> Assume conscious experience is not organizationally invariant. Then
> there exist systems with the same causal topology but different
> conscious experiences. Let us say this is because the systems are made
> of different materials, such as neurons and silicon [...] Consider
> these [two] systems, N and S, which are identical except in that
> some circuit in one is neural and in the other is silicon.
>
> The key step in the thought-experiment is to take the relevant neural
> circuit in N, and to install alongside it a causally isomorphic silicon
> back-up circuit, with a switch between the two circuits. What happens when
> we flip the switch? By hypothesis, the system's conscious experiences will
> change [...]
>
> But given the assumptions, there is no way for the system to notice these
> changes. Its causal topology stays constant, so that all of its functional
> states and behavioral dispositions stay fixed. [...] We might even
> flip the switch a number of times, so that [...] experiences "dance"
> before the system's inner eye; it will never notice. This, I take
> it, is a reductio ad absurdum of the original hypothesis: if one's
> experiences change, one can potentially notice in a way that makes some
> causal difference. Therefore the original assumption is false, and
> phenomenal properties are organizational invariants.
If everything said up to this point is accepted, then this is a
well-reasoned argument, and it is reasonable to expect a causal
difference to be noticeable when experiences change. However, the
whole argument relies on the two circuits being functionally
identical, and I have not accepted that this will still be the case
after the replacement of neurons with silicon has been made.
> CHALMERS:
> If all this works, it establishes that most mental properties are
> organizational invariants: any two systems that share their fine-grained
> causal topology will share their mental properties, modulo the
> contribution of the environment.
Having not accepted the argument put forward above, I have to
maintain that most mental properties are not organizational
invariants, and further that two systems sharing a causal topology
need not share their mental properties.
> CHALMERS:
> To establish the thesis of computational sufficiency, all we need to do
> now is establish that organizational invariants are fixed by some
> computational structure. This is quite straightforward.
>
> An organizationally invariant property depends only on some pattern of
> causal interaction between parts of the system. Given such a pattern, we
> can straightforwardly abstract it into a CSA description: the parts of the
> system will correspond to elements of the CSA state-vector, and the
> patterns of interaction will be expressed in the state-transition rules.
> [...] Any system that implements this CSA will share the causal
> topology of the original system. [...]
>
> If what has gone before is correct, this establishes the thesis of
> computational sufficiency, and therefore the view that Searle has
> called "strong artificial intelligence": that there exists some
> computation such that any implementation of the computation possesses
> mentality. The fine-grained causal topology of a brain can be specified as
> a CSA. Any implementation of that CSA will share that causal topology, and
> therefore will share organizationally invariant mental properties that
> arise from the brain.
This argument relies on the claim that mentality is an organizational
invariant, and also on implementation independence. I have accepted
neither claim, and have argued against both. Both are tied to my
belief that there is much more to mentality than the discrete
functionality of a system that possesses it. A system that possesses
mentality cannot, I believe, be expressed as a set of discrete states
and transitions between them: there is a time dependence that this
representation cannot capture.
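For reference, the kind of description Chalmers has in mind can be
sketched roughly as follows - a toy combinatorial state automaton of
my own invention, not the brain's actual CSA. The state is a vector
with one element per part, and the transition rule updates each
element from the parts that causally affect it. My objection is
visible in the sketch itself: everything advances in lockstep ticks,
so any finer-grained timing between parts has already been abstracted
away.

    # A toy combinatorial state automaton (CSA), invented for
    # illustration.  The state is a vector with one element per part;
    # the transition rule updates each element from the previous
    # values of the parts that causally affect it.

    def csa_step(state):
        """One synchronous update of a 3-element state vector."""
        s0, s1, s2 = state
        return (
            s0,              # part 0: an external input, held fixed
            (s0 + 1) % 3,    # part 1 depends only on part 0
            (s1 + s2) % 3,   # part 2 depends on parts 1 and 2
        )

    state = (1, 0, 2)
    for tick in range(4):
        print(tick, state)
        state = csa_step(state)

    # Every implementation of these transition rules shares the same
    # pattern of dependencies among parts - but "time" here is nothing
    # more than a sequence of discrete ticks.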
> CHALMERS:
> A computational basis for cognition can be challenged in two ways. The
> first sort of challenge argues that computation cannot do what cognition
> does: that a computational simulation might not even reproduce human
> behavioral capacities, for instance, perhaps because the causal structure
> in human cognition goes beyond what a computational description can
> provide. The second concedes that computation might capture the
> capacities, but argues that more is required for true mentality.
I have said that I don't believe a system in possession of mentality
can be captured by a discrete specification, because of time
dependence. Time dependence can be captured in a discrete system to
an ever increasing level of accuracy, so my argument may come down to
whether we will ever be able to describe a brain in enough detail
that the reaction of every neuron is known. I will argue for the
first sort of challenge given above, as I believe that however
accurate a discrete system becomes, it will never be accurate enough.
> CHALMERS:
> The question about whether a computational model simulates or replicates a
> given property comes down to the question of whether or not the property
> is an organizational invariant. The property of being a hurricane is
> obviously not an organizational invariant, for instance, as it is
> essential to the very notion of hurricanehood that wind and air be
> involved. The same goes for properties such as digestion and temperature,
> for which specific physical elements play a defining role. There is no
> such obvious objection to the organizational invariance of cognition, so
> the cases are disanalogous, and indeed, I have argued above that for
> mental properties, organizational invariance actually holds. It follows
> that a model that is computationally equivalent to a mind will itself be a
> mind.
There is no obvious objection to the organizational invariance of
cognition, but I still have an objection, as I have expressed earlier.
> CHALMERS:
> The Chinese room. There is not room here to deal with Searle's famous
> Chinese room argument in detail. I note, however, that the account I have
> given supports the "Systems reply", according to which the entire system
> understands Chinese even if the homunculus doing the simulating does not.
> Say the overall system is simulating a brain, neuron-by-neuron. Then like
> any implementation, it will share important causal organization with the
> brain. In particular, if there is a symbol for every neuron, then the
> patterns of interaction between slips of paper bearing those symbols will
> mirror patterns of interaction between neurons in the brain, and so on.
Suppose that there is a precise time dependence between the neurons
in the brain. The system described above could simulate a brain
neuron-by-neuron, just much more slowly - if we slow down the
operation of the brain uniformly, it is conceivable that the time
dependence would not be sacrificed. The description of the system is
still discrete, however, and so I would argue that the patterns of
interaction between the slips of paper would not mirror the patterns
of interaction between neurons in the brain.
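A quick check of the uniform-slowdown point (the delays are toy
numbers, invented for illustration): scaling every delay in a system
by the same factor preserves the relative order of events, even
though everything runs far more slowly.

    delays = {"path_1": 2.0, "path_2": 5.0, "path_3": 3.5}  # ms

    def event_order(delay_map):
        return sorted(delay_map, key=delay_map.get)

    # Uniform slowdown: multiply every delay by the same large factor.
    slowed = {path: d * 1_000_000 for path, d in delays.items()}

    print(event_order(delays))   # ['path_1', 'path_3', 'path_2']
    print(event_order(slowed))   # same order: relative timing preserved
    print(event_order(delays) == event_order(slowed))  # True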
> CHALMERS:
> We have every reason to believe that the low-level laws of physics
> are computable. If so, then low-level neurophysiological processes
> can be computationally simulated; it follows that the function of
> the whole brain is computable too, as the brain consists in a
> network of neurophysiological parts. Some have disputed the premise:
> for example, Penrose (1989) has speculated that the effects of
> quantum gravity are noncomputable, and that these effects may play a
> role in cognitive functioning.
It could be that the low-level laws of physics are not computable for
the very same reason that I have argued mentality is not computable.
It is also reasonable to believe that the effects of quantum gravity
play a role in cognitive functioning, since cognitive functioning
involves the movement of electrons in the brain.
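For what it is worth, the kind of low-level simulation Chalmers
appeals to can be sketched like this: a standard leaky
integrate-and-fire neuron stepped forward in discrete time (the
parameter values are arbitrary choices for illustration). Whether the
small shift in spike times between step sizes matters is precisely
the point at issue.

    # A leaky integrate-and-fire neuron simulated with Euler steps.
    # The model is standard; the parameter values are arbitrary.

    def simulate_lif(input_current, dt=0.1, t_max=100.0, tau=10.0,
                     v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0,
                     r=10.0):
        """Return spike times (ms) of a leaky integrate-and-fire
        neuron driven by a constant current, using time step dt."""
        v = v_rest
        spikes = []
        for i in range(int(t_max / dt)):
            # Euler step of  tau * dV/dt = -(V - v_rest) + R * I
            v += dt / tau * (-(v - v_rest) + r * input_current)
            if v >= v_thresh:
                spikes.append(i * dt)
                v = v_reset
        return spikes

    print(simulate_lif(2.0, dt=0.1)[:3])   # first few spike times
    print(simulate_lif(2.0, dt=0.01)[:3])  # finer step: times shift slightly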
> CHALMERS:
> There are good reasons to suppose that whether or not cognition in
> the brain is continuous, a discrete framework can capture everything
> important that is going on. To see this, we can note that a discrete
> abstraction can describe and simulate a continuous process to any
> required degree of accuracy. It might be objected that chaotic
> processes can amplify microscopic differences to significant levels.
> Even so, it is implausible that the correct functioning of mental
> processes depends on the precise value of the tenth decimal place of
> analog quantities. The presence of background noise and randomness
> in biological systems implies that such precision would inevitably
> be "washed out" in practice. It follows that although a discrete
> simulation may not yield precisely the behavior that a given
> cognitive system produces on a given occasion, it will yield
> plausible behavior that the system might have produced had
> background noise been a little different. This is all that a
> proponent of artificial intelligence need claim.
I accept that a discrete simulation would be accurate to a certain
degree, but because of the high connectivity of neurons in the brain,
small errors could be amplified into large differences in behaviour
as reactions spread from neuron to neuron. The argument that
background noise would "wash out" such precision in practice is a
good one, and I don't have an argument against it. I do object to
Chalmers speaking of randomness directly after he has spoken of
chaotic behaviour, and I don't believe that randomness has a place in
biological systems, or in any system for that matter.
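To separate the two points, here is a sketch using the logistic map
as a stand-in for a chaotic process (my choice of example, not
Chalmers'). A difference in the tenth decimal place of the initial
condition eventually grows large - my worry about high connectivity -
but injecting a small amount of noise produces divergence of the same
magnitude, which is Chalmers' point about precision being washed out.

    import random

    def logistic_trajectory(x0, steps=60, r=3.9, noise=0.0, seed=0):
        """Iterate the chaotic logistic map, optionally perturbed by
        uniform noise of the given amplitude at each step."""
        rng = random.Random(seed)
        x, xs = x0, []
        for _ in range(steps):
            x = r * x * (1.0 - x) + noise * rng.uniform(-1.0, 1.0)
            x = min(max(x, 0.0), 1.0)  # keep the state in [0, 1]
            xs.append(x)
        return xs

    a = logistic_trajectory(0.2)
    b = logistic_trajectory(0.2 + 1e-10)  # 10th decimal place differs
    print(max(abs(x - y) for x, y in zip(a, b)))  # grows large: my worry

    c = logistic_trajectory(0.2, noise=1e-6, seed=1)
    d = logistic_trajectory(0.2, noise=1e-6, seed=2)
    print(max(abs(x - y) for x, y in zip(c, d)))  # noise alone diverges
                                                  # just as much: Chalmers' point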
> CHALMERS:
> It follows that these considerations do not count against the theses of
> computational sufficiency or of computational explanation. To see the
> first, note that a discrete simulation can replicate everything essential
> to cognitive functioning, for the reasons above, even though it may not
> duplicate every last detail of a given episode of cognition.
> To see the second, note that for similar reasons the precise values
> of analog quantities cannot be relevant to the explanation of our
> cognitive capacities, and that a discrete description can do the
> job.
So the argument here is that, although a discrete system cannot
reproduce the exact operation of a given brain, the operation it does
perform is still cognitive. I have said that I believe the precise
time dependence in a brain is important, and it is fair to say that a
discrete system trying to implement cognition would have its own
precise time dependence. Would that then constitute cognition? I don't
know...
Steve
sjlb197@soton.ac.uk