> > CHALMERS:
> > What are the conditions under which a physical system
> > implements a given computation? Searle (1990) has argued that
> > there is no objective answer to this question, and that any given
> > system can be seen to implement any computation if interpreted
> > appropriately.
> Pentland:
> This sounds like rubbish to me... Interpretation should not be used to
> argue that anything can do anything, it should be said that
For the same reason that Solipsism "shouldn't" be used?
> Pentland:
> interpretation depends on environment. And that environment plays a
> significant part in that. On this basis I propose that computation is
> flawed as it is supposed to be implementation independent; this does
> not allow much space for environment to be considered.
I don't understand. If the interpretation of a system, when deciding
whether it implements a computation or not, depends on the environment,
isn't that a factor of the implementation? Why is implementation-independent
computation flawed for not including an implementation dependency?
> > CHALMERS:
> > One might think that CSAs are not much of an advance on
> > FSAs. Finite CSAs, at least, are no more computationally powerful
> > than FSAs; there is a natural correspondence that associates every
> > finite CSA with an FSA with the same input/output behavior.
> Pentland:
> And goes on to mention Turing machines as "infinite CSA's", which are
> more powerful.
> CHALMERS:
> Of course infinite CSAs (such as Turing machines) are more powerful,
> but even leaving that reason aside, there are a number of reasons why
> CSAs are a more suitable formalism for our purposes than FSAs.
> Pentland:
> CSA's he said are not that powerful, but he persists in
> using them for interpretation, as he proposes that they offer a better
> description of a system, so why does he feel the need to "rubbish"
> Turing's work. Surely he should move on to answering the critics of
> computationalism.
Where does he "rubbish" Turing's work? He doesn't say that finite CSAs are
less powerful than FSAs, only that they aren't more powerful. It seems that
he is using the formalism of his combinatorial-state automata because it is
easier to describe what he wants in it. He's just trying to "use the right
tools for the job".
> > CHALMERS:
> > It is true that any given instance of digestion will implement
> > some computation, as any physical system
> > does, but the system's implementing this computation is in general
> > irrelevant to its being an instance of digestion. To see this, we
> > can note that the same computation could have been implemented by
> > various other physical systems (such as my SPARC) without its
> > being an instance of digestion. Therefore the fact that the system
> > implements the computation is not responsible for the existence of
> > digestion in the system.
> Pentland:
> What is he trying to say here, can his SPARC digest, or that it can't
> but can simulate it... implementation doesn't matter... or that it
> isn't actually digestion on his SPARC but a meaningless computation...
> if this is the case surely his SPARC carrying out cognition
> computations isn't actually cognition. Has he shot himself in the foot
> here?
I'd go with door number four: whatever his SPARC is doing isn't digestion.
It seems that to Chalmers digestion isn't important; the hidden computation
behind it is, and his SPARC can implement that computation, though not in
exactly the same way as digestion does. Chalmers then argues that all
cognitive systems implement a computation and that ALL implementations of
that computation are cognitive. He then says that if this weren't true
> CHALMERS:
> ...the computational status of cognition would be analogous to that of
> digestion.
That is, only the mind itself would be cognitive.
> > CHALMERS:
> > ...any role that computation can play in providing a foundation for
> > AI and cognitive science will be endangered,
> > as the notion of semantic content is so ill-understood that it
> > desperately needs a foundation itself.
> Pentland:
> Sounds like he is partially preparing for defeat... after all isn't
> computation semantically interpretable?
Yes, but Chalmers argues that the definition of a computation is syntactic,
and that the semantic content arrives with an implementation of the
computation.
> > CHALMERS:
> > In the words of Haugeland (1985), if you take care of
> > the syntax, the semantics will take care of itself.
> Pentland:
> I don't like this argument, Chalmers just stated that semantics are
> independent of syntax.
Indeed. He is attempting to set us up for the point where he declares
> CHALMERS:
> Causal organization is the nexus between computation and cognition
and that this is what draws semantics and syntax together.
> > CHALMERS:
> > Most properties are not organizational invariants. The
> > property of flying is not, for instance: we can move an airplane to
> > the ground while preserving its causal topology, and it will no
> > longer be flying. Digestion is not: if we gradually replace the
> > parts involved in digestion with pieces of metal, while preserving
> > causal patterns, after a while it will no longer be an instance of
> > digestion: no food groups will be broken down, no energy will be
> > extracted, and so on. The property of being a tube of toothpaste is
> > not an organizational invariant: if we deform the tube into a
> > sphere, or replace the toothpaste by peanut butter while preserving
> > causal topology, we no longer have a tube of toothpaste.
>
> > In general, most properties depend essentially on certain features
> > that are not features of causal topology. Flying depends on
> > height, digestion depends on a particular physiochemical makeup,
> > tubes of toothpaste depend on shape and physiochemical makeup, and
> > so on. Change the features in question enough and the property in
> > question will change, even though causal topology might be
> > preserved throughout.
> Pentland:
> If this is all true then AI has some problems, as state of mind must be
> dependent on some environmental factors; hormones for example will
> alter one's perception of an otherwise identical situation... compare
> this to Chalmers's flying argument.
Agreed.
> > CHALMERS:
> > Such properties include knowledge (if we move a system
> > that knows that P into an environment where P is not true, then it
> > will no longer know that P), and belief, on some construals where
> > the content of a belief depends on environmental context. However,
> > mental properties that depend only on internal (brain) state will
> > be organizational invariants.
> Pentland:
> If I know g to be 9.8N, and I went to the moon where g isn't 9.8N then I
> would still remember (know) 9.8N as g even though it would be
> irrelevant and incorrect in the context of being on the moon.
This is a bad example: you know g to be about 9.8 m/s^2 on Earth, but since
g depends on the mass and radius of the body you are standing on, you should
expect it to change on the moon. The example given in class was the rain
outside the window stopping when your back is to the window.
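For what it's worth, a quick sketch with standard textbook values (purely
illustrative) shows why g differs between the two bodies:

    # Surface gravity is g = G*M / r**2, so it depends on the mass and
    # radius of the body you are standing on.
    G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2

    bodies = {
        "Earth": (5.972e24, 6.371e6),   # mass (kg), radius (m)
        "Moon":  (7.342e22, 1.737e6),
    }

    for name, (mass, radius) in bodies.items():
        g = G * mass / radius ** 2
        print(name, "g is about", round(g, 2), "m/s^2")
    # Earth: about 9.82 m/s^2, Moon: about 1.62 m/s^2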
> > CHALMERS:
> > Phenomenal properties are more problematic. It seems
> > unlikely that these can be defined by their causal roles (although
> > many, including Lewis and Armstrong, think they might be). To be a
> > conscious experience is not to perform some role, but to have a
> > particular feel. These properties are characterized by what it is
> > like to have them, in Nagel's (1974) phrase. Phenomenal properties
> > are still quite mysterious and ill-understood.
> Pentland:
> Getting ready for problems again... Seems to gloss quickly over the
> problems with his argument, not attempting to answer anything much.
I find it strange that he can spend a page reducing the importance of
something he doesn't understand to nothing.
> > CHALMERS:
> > But a computational model is just a simulation!
> > According to this objection, due to Searle (1980), Harnad (1989),
> > and many others, we do not expect a computer model of a hurricane
> > to be a real hurricane, so why should a computer model of mind be a
> > real mind? But this is to miss the important point about
> > organizational invariance. A computational simulation is not a mere
> > formal abstraction, but has rich internal dynamics of its own. If
> > appropriately designed it will share the causal topology of the
> > system that is being modeled, so that the system's organizationally
> > invariant properties will be not merely simulated but replicated.
>
> But they aren't replicated.... It is a simulation and the product will
> never in my view be complete.
>
> > It is precisely in virtue of this causal organization that the
> > system possesses its mental properties. We can rerun a version of
> > the "dancing qualia" argument to see this. In principle, we can
> > move from the brain to the Chinese room simulation in small steps,
> > replacing neurons at each step by little demons doing the same
> > causal work, and then gradually cutting down labor by replacing two
> > neighboring demons by one who does the same work. Eventually we
> > arrive at a system where a single demon is responsible for
> > maintaining the causal organization, without requiring any real
> > neurons at all. This organization might be maintained between marks
> > on paper, or it might even be present inside the demon's own head,
> > if the calculations are memorized. The arguments about
> > organizational invariance all hold here - for the same reasons as
> > before, it is implausible to suppose that the system's experiences
> > will change or disappear.
> Pentland:
> So will that demon be the simulation... If that demon is not
> intelligent will it learn to be intelligent if it is simulating a
> human, or is it bounded by its own computational complexity?
If we have a demon that exactly replaces a neuron, so that there is no
external functional difference, why would the demon be intelligent, or learn
anything? It is just a neuron. If each neuron's work were copied exactly, I
don't see how the big demon that does the work of all the demons would be
different from the wet brain. It would still have the mind it had before.
The feasibility of simulating neurons I'll leave to the study of neural
networks.
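If it helps, here is a toy Python sketch of the consolidation step (the
"network" below is entirely hypothetical): whether each unit is updated by
its own little demon or one demon does all the work in a single pass, the
input/output behaviour is identical, because the same causal organization
is being followed.

    import random

    # Toy network: each unit's next value depends on the whole current state.
    N = 8
    random.seed(0)
    weights = [[random.gauss(0, 1) for _ in range(N)] for _ in range(N)]

    def unit_demon(i, state):
        # One little demon: computes the next value of unit i only.
        total = sum(w * s for w, s in zip(weights[i], state))
        return 1.0 if total > 0 else 0.0

    def many_demons_step(state):
        # One demon per unit, each doing its own small job.
        return [unit_demon(i, state) for i in range(N)]

    def single_demon_step(state):
        # One demon memorises every per-unit rule and applies them all itself.
        new_state = []
        for i in range(N):
            total = sum(w * s for w, s in zip(weights[i], state))
            new_state.append(1.0 if total > 0 else 0.0)
        return new_state

    state = [random.choice([0.0, 1.0]) for _ in range(N)]
    for _ in range(5):
        assert many_demons_step(state) == single_demon_step(state)
        state = many_demons_step(state)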
> > CHALMERS:
> > Discreteness and continuity. An important objection
> > notes that the CSA formalism only captures discrete causal
> > organization, and argues that some cognitive properties may depend
> > on continuous aspects of that organization, such as analog values
> > or chaotic dependencies...
>
> > ...In this case, the specification of discrete state-transitions
> > between states can be replaced by differential equations
> > specifying how continuous quantities change in continuous time,
> > giving a thoroughly continuous computational framework. MacLennan
> > (1990) describes a framework along these lines. Whether such a
> > framework truly qualifies as computational is largely a
> > terminological matter, but it is arguable that the framework
> > is significantly similar in kind to the traditional approach; all
> > that has changed is that discrete states and steps have been
> > "smoothed out".
> Pentland:
> Doesn't this make it impossible to implement? All hardware must have some
> form of clock to enable some vague form of synchronisation.
Good point, but TV runs at about 24 frames a second (I think) and at that
rate people's actions seem as normal to me as they do in the flesh. Perhaps
if the time quanta used were made small enough there would be no difference
for the computation of cognition.
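As a rough illustration of the "small enough time quanta" idea (the
continuous system dx/dt = -x below is a toy chosen only for the example),
the discrete state transitions approach the continuous behaviour as the
step size shrinks:

    import math

    # Discretise the continuous system dx/dt = -x with Euler steps of size dt.
    # As dt shrinks, the discrete trajectory approaches the continuous
    # solution x(t) = exp(-t).
    def euler_x_at_1(dt, x0=1.0):
        x = x0
        for _ in range(round(1.0 / dt)):
            x += dt * (-x)              # one discrete state-transition step
        return x

    exact = math.exp(-1.0)
    for dt in (0.5, 0.1, 0.01, 0.001):
        print("dt =", dt, "x(1) =", round(euler_x_at_1(dt), 6),
              "exact =", round(exact, 6))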
> > CHALMERS:
> > Even so, it is implausible that the correct functioning
> > of mental processes depends on the precise value of the tenth
> > decimal place of analog quantities. The presence of background
> > noise and randomness in biological systems implies that such
> > precision would inevitably be "washed out" in practice. It follows
> > that although a discrete simulation may not yield precisely the
> > behavior that a given cognitive system produces on a given
> > occasion, it will yield plausible behavior that the system might
> > have produced had background noise been a little different.
> Pentland:
> Guesswork at best... does it mean that the noise is irrelevant? I
> believe that noise etc. is important, as the brain has less mass than is
> needed to process the amount of information that the nervous system is
> capable of carrying. Noise in the process of allocating resources and
> selecting which data to process will have catastrophic effects on the
> system.
> > CHALMERS:
> > In a similar way, a computationalist need not claim that
> > the brain is a von Neumann machine, or has some other specific
> > architecture. Like Turing machines, von Neumann machines are just
> > one kind of architecture, particularly well-suited to
> > programmability, but the claim that the brain implements such an
> > architecture is far ahead of any empirical evidence and is most
> > likely false. The commitments of computationalism are more
> > general.
> Pentland:
> But does this make it too general to implement?
If the brain implements an architecture for computation, whatever it is
should be Turing-equivalent, and therefore we could reimplement it on a von
Neumann machine.
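As a sketch of that last point (the machine below is a made-up
binary-increment example, not anything from the paper), a Turing machine is
itself easy to re-implement as an ordinary program on a von Neumann
architecture:

    # Minimal Turing-machine simulator.  The rule table below implements
    # binary increment, with the head starting at the right end of the tape.
    def run_tm(rules, tape, state="start", blank="_", max_steps=10000):
        cells = dict(enumerate(tape))
        pos = len(tape) - 1
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(pos, blank)
            write, move, state = rules[(state, symbol)]
            cells[pos] = write
            pos += 1 if move == "R" else -1
        out = "".join(cells.get(i, blank)
                      for i in range(min(cells), max(cells) + 1))
        return out.strip(blank)

    increment_rules = {
        ("start", "1"): ("0", "L", "start"),   # carry: 1 -> 0, keep moving left
        ("start", "0"): ("1", "L", "copy"),    # absorb the carry
        ("start", "_"): ("1", "L", "copy"),    # carry past the leftmost digit
        ("copy",  "0"): ("0", "L", "copy"),
        ("copy",  "1"): ("1", "L", "copy"),
        ("copy",  "_"): ("_", "R", "halt"),
    }

    print(run_tm(increment_rules, "1011"))     # prints "1100"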
Brown, Richard
r.brown@zepler.org