**From:** Hudson Joe (*jh798@ecs.soton.ac.uk*)

**Date:** Sat Jun 09 2001 - 00:59:10 BST

**Next message:** Clark Graham: "Re: the Sony Turing test case (fwd)"
**Previous message:** HARNAD Stevan: "Re: MacLennan: Grounding Analogue Computers"
**In reply to:** HARNAD Stevan: "Re: MacLennan: Grounding Analogue Computers"
**Next in thread:** HARNAD Stevan: "Re: MacLennan: Grounding Analogue Computers"
**Reply:** HARNAD Stevan: "Re: MacLennan: Grounding Analogue Computers"

> Hudson:
> whatever analogue computation is, a digital computer could approximate
> the physical behaviour of the implementation to the nth degree

> HARNAD:
> This just means that a digital computer could simulate any continuous
> process to as close an approximation as we wish. But this is still
> just simulation (Turing Equivalence, or even Strong Equivalence -- as
> close as we like). But, for the same reason that simulated flying (no
> matter how closely it approximates it) is not real flying, simulated
> continuity is not real continuity.
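The "as close an approximation as we wish" claim can be made concrete with a toy sketch (illustrative only; the example system, step sizes, and function names are mine, not anything proposed in this thread). A forward-Euler integrator for dx/dt = -x gets as close as we like to the analytic solution exp(-t) by shrinking the step, yet every run remains a finite sequence of discrete states:

```python
import math

# Toy discretisation of the continuous system dx/dt = -x, x(0) = 1.
# Shrinking the step size drives the digital approximation as close to
# the analytic solution exp(-t) as we wish -- but each run is still a
# finite sequence of discrete state updates, never real continuity.
def euler(dt, t_end=1.0):
    x = 1.0
    for _ in range(round(t_end / dt)):
        x += dt * (-x)  # discrete update standing in for continuous flow
    return x

exact = math.exp(-1.0)
for dt in (0.1, 0.01, 0.001):
    print(dt, abs(euler(dt) - exact))  # error shrinks as dt shrinks
```

The sketch only illustrates the quoted point: the approximation improves without bound, but 'simulated continuity' is always realised as discrete steps.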

Hudson:

Sorry to go through this yet again, but what is real flying? If the
distinction between real and simulated is just that for the real thing
the implementation is critical, then we can recreate the same situation
inside a VR booth. Suppose we are inside a fully immersive VR system
(one that caters for all the body's senses). We 'see' a plane flying
over our heads and, at the same time, a computer sitting on a desk in
front of us, beside a VR booth. Every atom of the computer is fully
modelled by the VR simulation, and that computer is itself running a VR
simulation of a plane flying.

At this point we hastily step outside the VR booth. Now both the VR
simulation and its simulation of another VR simulation merge into the
same domain of implementation-independent computation, as I'm sure you
would agree. However, if we step back inside the VR booth, from this
perspective (and that, surely, is all we have: just a distorted
perspective) the plane we see above our heads is not implementation
independent. If, according to the (hidden) aerodynamics model, the
wings are the wrong shape, then the plane won't fly. Sure, we could say
that it's all really running on a computer, and that the computer could
be any shape or size so long as it was Turing equivalent, but then the
computer is nowhere to be seen, as we are in the VR sim. That is, there
is no evidence we could possibly gather from within the VR sim that the
plane we see flying above our heads is anything but implementation
DEPENDENT, and the same goes for a flower we might touch and smell.

Now (still in the VR booth) our attention is drawn to the computer on
the desk. We see a plane flying across its screen: obviously just an
implementation-independent simulation. We decide, against our better
judgement, to step inside this second VR booth. Now, by the same token,
what was implementation independent becomes implementation dependent.
We can imagine an infinite regress.
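The regress can be given a toy cost model (entirely hypothetical; the OVERHEAD factor and function name are invented for illustration). If each VR level runs the level below it with some fixed interpretive slowdown, the nesting is conceptually unbounded, but the cost seen from the outermost level grows geometrically with depth, so only finitely many levels could ever be physically realised:

```python
# Hypothetical cost model of nested VR simulations: each level runs the
# level below it with a fixed interpretive slowdown. The regress can be
# imagined ad infinitum, but the outermost-level cost of one inner step
# grows geometrically with nesting depth.
OVERHEAD = 10  # invented slowdown factor per level of nesting

def outer_steps(inner_steps, depth):
    """Outermost-level steps needed to run inner_steps at a given depth."""
    return inner_steps * OVERHEAD ** depth

for depth in range(5):
    print(depth, outer_steps(1, depth))  # 1, 10, 100, 1000, 10000
```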

Could all this be countered by pointing out that if your heart stops in
the real world then all those nested simulations vanish and only the
real world remains, hence all they ever were was computer simulation?
Not completely.

Whether or not 'we' are causal systems, it appears we do need a brain
to function in the real world. I think it can be accepted that our
brains are causal systems, and hence that their functionality can be
recreated in simulation. But could the actual 'physical' implementation
of our brains matter somehow? Well, if we say it is only our brain
which governs our behaviour, and we imagine a body with a brain (TimII)
in our VR sim, then it should act in exactly the same sort of way we
do. So from a functionalist point of view we are implementation
independent.

This leads to the observation that if TimII also enters the second VR
booth, and his heart (which, in the first VR sim, was Turing
indistinguishable to us from our own) stops, then he will cease to
function in the second VR sim too. All this means is that TimII's
existence will halt at every nested VR-sim level up to the one where he
was first defined and physically modelled. For a functionalist there
should be no essential difference between 'physically defined' and
'physically situated'. This objection only illustrates the significance
of the 'original VR-sim nesting level'; it does not in essence
distinguish between reality and simulation.

There are three main points here:

1/ Implementation independence/dependence (II, ID) depends on your
plane of awareness: for TimII the airplane he sees is ID, but for us
(outside all the VR booths) it is an II simulation. Who is to say we
are not in the same position as TimII?

2/ We have no basis to say that our plane of awareness is somehow the
original and 'true' physical existence.

3/ When we say 'real flying' we should be aware of 1/ and 2/.

> HARNAD:
> But never mind continuity. Perhaps it is more instructive to think of
> every dynamical physical system, even an airplane, as an "analog device"
> of some sort (not necessarily a "computer"). That gives us a better
> sense of the gap between the analog and the symbolic (better than
> "digital," which focusses too much on just the continuous/discrete
> distinction).

Hudson:

Again, symbolic systems are disallowed from being analogue. But why?
Did my example (Tom the AI) not show how a symbolic system is best
viewed as continuous? I say 'viewed as continuous' and not 'is
continuous' because any physically implemented system (yes, even an
airplane) can be viewed as discrete at one scale and continuous at
another. Something that is considered analogue (in this sense) is just
'best viewed as' continuous.

> Hudson:
> First off, how can a state be continuous? State implies something which
> is bounded, so perhaps a sine wave at frequency F1 could be used to
> represent state S1. But then only the representation of the state (the
> sine wave) would be continuous; the state S1 itself would be static and
> discrete. Perhaps MacLennan meant 'the representation of states' rather
> than "representational states".

> HARNAD:
> No, he just meant states in the usual physical sense, described by
> differential equations, hence continuous.

Hudson:

I see your point. I do think the flexible use of 'state' causes
problems, though. In the case of differential equations in time, the
'continuous state' viewed at one moment will appear different when
viewed at another, so the two could not be recognised as the same
state. It gets even more confusing when people say mental states are
computational states, in my opinion.

> Hudson:
> Secondly why is transduction a "central issue in symbol grounding"? So
> long as the method of energy conversion from sound pressure, light,
> mechanical resistance, etc. to electrical signals provides enough
> information for the computational bits to function properly why worry
> about it? Isn't the symbol grounding problem more, 'how do we use the
> transduced electrical signals to terminate the hierarchical symbolic
> definition chain?' (But then even if symbols were optimally grounded
> I've no idea how this would conjure up meaning in the system.) Did
> Harnad really say that transduction was the central issue?

> HARNAD:
> Yes he did (if I do say so myself): because sensorimotor transactions
> with the objects symbols refer to are the only ones that CANNOT be
> symbolic. And chances are, they are part (literally, physically part)
> of whatever physical states mental states turn out to be. Hence
> mental states, unlike computational states, will not be
> implementation-independent.
>
> So transduction is the most important non-symbolic process, but it's
> unlikely to be the only one (as neuropharmacology is showing us).

Hudson:

"Whatever physical states mental states turn out to be"? Brain states
might be physical, but what indication is there that mental (feeling)
states are? Even if there is a 1:1 correlation between the two, how do
we know which causes which? This is like saying 'my hand is feeling'
when surely it is 'I am feeling'.

I agree transduction needs to be there, but not that it is a 'central
issue'.

Joe

P.S.

Regarding the idea, which you spoke of on a few occasions, of a
complete world simulation in which we could in principle fast-forward
to see ourselves: in this simulation there would be a simulation of
this simulation (and so on), would there not? Doesn't this lead to an
infinite performance capacity? (For example, imagine placing the
simulation's virtual viewing camera at the viewing screen of the
simulated simulation.) In which case it is not physically realisable.
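The P.S. argument can be caricatured in code (a deliberately crude sketch, assuming the simulation must contain a full-fidelity model of itself): advancing the world one tick requires first advancing the inner copy one tick, and so on, so no finite machine ever completes a single tick:

```python
# Toy version of the self-containing-simulation argument: a "complete"
# world simulation contains the simulator itself, so advancing the world
# one tick means advancing the inner copy one tick, without end.
def advance_world(depth=0):
    # ...update the ordinary objects in the world...
    # the world also contains this very simulator, so:
    return advance_world(depth + 1)

try:
    advance_world()
except RecursionError:
    print("no finite machine can complete even one tick")
```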

By the way, how was the Dennett talk?


*This archive was generated by hypermail 2.1.4: Tue Sep 24 2002 - 18:37:31 BST*