**From:** Wright Alistair (*afw198@ecs.soton.ac.uk*)

**Date:** Tue May 29 2001 - 05:37:56 BST

**Next message:** Cove Stuart: "Re: Dennett: Making Conscious Robots"
**Previous message:** Basto Jorge: "Re: Turing Test question"
**Maybe in reply to:** Sparks Simon: "Harnad: Cognition Isn't Computation"
**Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

HARNAD:

> The fathers of modern computational theory (Church, Turing, Goedel,
> Post, von Neumann) were mathematicians and logicians. They did not
> mistake themselves for psychologists. It required several more
> decades for their successors to begin confusing computation with
> cognition (Fodor 1975; Newell 1980; Pylyshyn 1984; Dietrich 1990)

Wright:

Here, the author examines Computationalism, the idea that cognition is
just a form of computation. This idea relates directly to that of the
strong CTTP.

HARNAD:

> There is a natural generalisation of CTT to physical systems (CTTP).
> According to the CTTP, everything that a discrete physical system can
> do (or everything that a continuous physical system can do, to as
> close an approximation as we like) can be done by computation. The
> CTTP comes in two dosages: A Weak and a Strong CTTP, depending on
> whether the thesis is that all physical systems are formally
> equivalent to computers or that they are just computers.

Wright:

The strong CTTP would imply that mathematicians have become physicists,
rather than psychologists: using computation not merely to describe
continuous physical systems (that is just the weak CTTP), but to say
that there is no difference between computing a physical process and
actually performing it. This view is easy to discard, as Harnad points
out.

HARNAD:

> But I actually think the Strong CTTP is wrong, rather than just
> vacuous, because it fails to take into account the all-important
> implementation-independence that does distinguish computation as a
> natural kind: For flying and heating, unlike computation, are clearly
> not implementation-independent. The pertinent invariant shared by
> all things that fly is that they obey the same sets of differential
> equations, not that they implement the same symbol systems

Wright:

But it is not clear at all whether cognition is
implementation-dependent.

Harnad's problems with computation seem to be partly based on his
analysis of Searle's Chinese Room Argument.
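Implementation-independence is easy to demonstrate for computation itself: the same formal symbol system can be realised in structurally quite different ways and remains the very same computation. A minimal sketch (the parity automaton and both function names here are my own illustrative inventions, not from Harnad's text):

```python
# Two structurally different realisations of one formal symbol system:
# a two-state automaton tracking the parity of '1's in a binary string.
# (Illustrative example only; not from the text under discussion.)

def parity_table(s):
    # Realisation 1: explicit state-transition table (pure symbol lookup).
    table = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"
    for symbol in s:
        state = table[(state, symbol)]
    return state

def parity_arith(s):
    # Realisation 2: arithmetic on a count -- a different mechanism
    # implementing the very same input-output function.
    return "odd" if s.count("1") % 2 else "even"

# Every faithful implementation of the symbol system agrees on every input.
for s in ["", "1", "1011", "0010"]:
    assert parity_table(s) == parity_arith(s)
```

The point of the sketch: what makes both functions "the same computation" is the shared formal input-output behaviour, not any shared physical or structural mechanism; flying and heating have no analogous invariant.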

HARNAD:

> So I see Turing as championing machines in general that have
> functional capacities indistinguishable from our own, rather than
> computers and computation in particular. Yet there are those who do
> construe Turing's Test as support for C=C. They argue: Cognition is
> computation. Implement the right symbol system -- the one that can
> pass the penpal test (for a lifetime) -- and you will have
> implemented a mind. Unfortunately, the proponents of this position
> must contend with Searle's (1980) celebrated Chinese Room Argument,
> in which he pointed out that any person could take the place of the
> penpal computer, implementing exactly the same symbol system, without
> understanding a word of the penpal correspondence. Since computation
> is implementation-independent, this is evidence against any
> understanding on the part of the computer when it is implementing
> that same symbol system.

Wright:

This notion of 'understanding' is hard to pin down. Yet it seems to
be the crux of the argument: no matter how elaborate our computational
systems for producing cognition, the implementation must occur,
physically, in some actual dynamic system. The Church-Turing thesis
tells us that the exact form of the implementation is unimportant;
indeed, the mechanism need not bear any relation to what is being
computed. C=C would use the CTTP to say that our brains are no more
powerful than any other computing mechanism. We cannot make much of
this in relation to our brains, since they are the only example of
'cognition machines' we have yet identified; that objection seems to
be an application of the 'other minds' argument. But the problem runs
deeper than this: it is a problem with what computation is, namely
'formal symbol manipulation'.

HARNAD:

> So although it is usually left unstated, it is still a criterial, if
> not a definitional property of computation that the symbol
> manipulations must be semantically interpretable -- and not just
> locally, but globally: All the interpretations of the symbols and
> manipulations must square systematically with one another, as they do
> in arithmetic, at the level of the individual symbols, the formulas,
> and the strings of formulas. It must all make systematic sense, in
> whole and in part (Fodor & Pylyshyn 1988).

Wright:

Weak CTTP says any physical process can be represented as symbols and
simulated with computation. But symbols are arbitrary; they are only
notational. We assign them a meaningful interpretation: as a chess
game, or as a description of water flow, or whatever.
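That arbitrariness can be made concrete: one and the same formal symbol trace can be read under entirely different observer-assigned interpretations. Both mappings below are invented for illustration; nothing in the symbols themselves fixes either reading.

```python
# The same formal symbol string, interpreted two ways by an observer.
# The trace and both mappings are hypothetical illustrations.

trace = ["A", "B", "A", "C"]  # output of some formal symbol manipulation

as_chess = {"A": "pawn advances", "B": "knight develops", "C": "castles"}
as_water = {"A": "valve opens", "B": "pressure rises", "C": "tank drains"}

chess_reading = [as_chess[s] for s in trace]
water_reading = [as_water[s] for s in trace]

print(chess_reading)  # reads as a chess game
print(water_reading)  # reads as water flow -- from identical symbols
```

The symbol manipulation is identical in both cases; only the observer's interpretation differs, which is exactly why meaning cannot be intrinsic to the symbols.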

HARNAD:

> Meaning does not enter into the definition of formal computation.

Wright:

What is important about computation is that it does not have to be
meaningful. A computational system has to be consistent, and this
possibly forces it to be decipherable somehow, but there is nothing
about its symbols that enforces a particular meaning; all meaning is
construed by the observer.

This is the problem with Computationalism. The idea that cognition
can be, and is, entirely based on formal symbol manipulation says
that what goes on in our own minds is identical to what can be
performed on paper, or by Searle's brain, or by a pocket calculator.
It may be that mental processes are describable like this, but there
is no practical reason to assume our thought processes are somehow
computed, even if that is not obviously false. Searle's argument
against it is quite convincing in this respect. And no observer
exists to interpret our own thoughts systematically, either.

After pointing out that simulated physical systems are not
functionally equivalent to real ones, Harnad considers simulated
cognitive systems.

HARNAD:

> A bit less obvious is the equally valid fact that a virtual pen-pal
> does not think (or understand, or have a mind) -- because he is just
> a symbol system systematically interpretable as if it were thinking
> (understanding, mentating).

Wright:

This does seem less than obvious. A real penpal, for instance, is
itself only systematically interpretable as having understanding. It
may be that a T2 system equipped with a modem would be perfectly
capable of corresponding with other people via e-mail, or any other
type of text-based internet communication. The modem would also give
the T2 penpal access to text, picture, and sound information to refer
to. But would such an attachment be enough to overcome the symbol
grounding problem? Harnad argues that sensorimotor capability, in
other words a T3 system, would be needed.

HARNAD:

> To ground it, one would have to build a real T3 robot -- and I hope it
> is obvious that that would not amount to merely attaching sensorimotor
> transducers to the computer doing the simulation (any more than
> building a real plane or furnace would amount to merely attaching
> sensorimotor transducers to their respective simulations)

Wright:

Here, Harnad seems to imply that to build a fully functional T3
system, one would have to design sensory capabilities into it, or
design it around sensorimotor functionality. This seems to make
sense: to make a grounded cognitive system, one would have to
integrate it into its environment as much as possible. The brain is
possibly largely sensory, after all. And, ignoring some more flexible
notions of what counts as cognitive (eg, Dennett's chess playing) in
favour of Turing's views, sensory capabilities would be essential for
a system to truly interact with its environment, and thus be seen as
cognitive in the same way as us.


*This archive was generated by hypermail 2.1.4: Tue Sep 24 2002 - 18:37:31 BST*