On Thu, 2 Mar 2000, Terry, Mark wrote:
> >Harnad:
> >Does a system that specifies a causal structure computationally have
> >that causal structure? Part of the causal structure of an airplane,
> >for example, is that it is able to lift off the ground and fly. Does
> >a plane simulator that simulates its causal structure have the causal
> >power to fly?
> >
> >If so, then the same thing should work for the mind. But if not, if
> >the simulation "specifies" the causal structure but doesn't actually
> >"have" it, then that's another story.
> >
> >What do you think (and why?)
>
> Clearly, modelling the causal structure of a system computationally
> does not mean the simulation HAS that causal structure; rather, it uses
> the causal structure to tell us about the ACTUAL system. However, I
> would propose that this point doesn't matter when considering AI. We
> are trying to model an abstract idea (what we have decided is
> "intelligence") in a system, and as such a system which models an
> aeroplane does not need to be able to fly; it just needs to be able to
> tell us what would happen to some properties of the aeroplane's flight
> given some (input) conditions. In the same way, a computational model
> of the mind which outputs instructions such as "speak" or "move", as I
> believe it is reasonable to assume our minds do, would be simulating
> intelligence without actually possessing it.
>
> Is this not enough for the machine to do? Certainly it would be expected
> to pass the Turing test. To model a mind, the machine does not have to BE
> the mind. This, I think, is the "artificial" part of AI.
A very well-put point. And, as I said in class, this is what Searle called
"Weak AI," in which the system that passes the TT is not really BEING a
mind, but merely being an oracle that predicts what a mind would do, in
the same way that a solar system model predicts what a solar system
would do, without being one.
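To make the simulated/instantiated distinction concrete, here is a
minimal sketch (my own illustration, not part of the exchange above) of
a "plane simulator" in Python. It evaluates the standard lift equation
L = 0.5 * rho * v^2 * S * C_L and predicts whether takeoff would occur;
every numeric constant below is an assumed placeholder, not data from
any real aircraft.

    # Minimal "plane simulator" sketch: a pure oracle for flight.
    # All constants are assumed placeholder values for illustration.
    RHO = 1.225            # air density at sea level, kg/m^3
    WING_AREA = 50.0       # wing area S, m^2 (assumed)
    C_LIFT = 1.2           # lift coefficient C_L (assumed)
    WEIGHT_N = 300_000.0   # aircraft weight in newtons (assumed)

    def lift_newtons(airspeed_mps: float) -> float:
        """Predict lift via L = 0.5 * rho * v^2 * S * C_L."""
        return 0.5 * RHO * airspeed_mps ** 2 * WING_AREA * C_LIFT

    def would_take_off(airspeed_mps: float) -> bool:
        """True if predicted lift exceeds weight at this airspeed."""
        return lift_newtons(airspeed_mps) > WEIGHT_N

    if __name__ == "__main__":
        for v in (50.0, 80.0, 110.0):
            print(f"v = {v:5.1f} m/s  lift = {lift_newtons(v):10.0f} N"
                  f"  takeoff predicted: {would_take_off(v)}")

The program has the aeroplane's causal structure only in the "specifies"
sense: it answers questions about lift, but no lift is generated. That
is exactly the Weak-AI, oracle reading of a TT-passing simulation.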
By this token, a virtual robot in a virtual world would pass a "virtual
TT." But to pass the real TT, it would have to BE a real robot, in the
real world.
Keep that in mind when you consider Searle's Chinese Room Argument,
which is meant to challenge "Strong AI," according to which the
TT-passer would BE a mind, rather than merely a predictor of what
someone with a mind would say/do. What is on trial is whether or
not computational states can BE mental states, not just whether they can
be used to predict what mental states would DO. (Many AI researchers
have held -- and still do -- that computation = cognition. Pylyshyn,
this week's skyreading, does, for example.)
Stevan