On Tue, 14 Mar 2000, Terry, Mark wrote:
> > > > McCARTHY:
> > > > Machines as simple as thermostats can be said to have beliefs, and
> > > > having beliefs seems to be a characteristic of most machines capable of
> > > > problem solving performance.
> > >
> > > Blakemore:
> > > I agree with Searle that this is a silly remark. We should only consider
> > > a machine to have intelligence when it passes at least T2 (perhaps it
> > > should be higher).
> >
> > Harnad:
> > Note, though, that the "silly remark" is made by John McCarthy, the
> > father of AI!
>
> Isn't the point of McCarthy's remark that if the computationalist view
> holds (beliefs are computational states), then a thermostat has
> beliefs because it has different states (e.g. on and off)?
If computationalism is true, then beliefs are not "represented by"
computational states (everything, including blood flow, can be
"represented by" computational states: that's just the Church-Turing
(C/T) Thesis); rather, beliefs ARE just computational states.
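(A toy illustration, in Python, of why merely "having states" is so
cheap: the thermostat below is my own hypothetical example, not
anyone's proposed model of belief. Any system with discrete states can
be described this way; the computationalist's further, stronger claim
is that the state IS the belief.)

    # Toy sketch (illustrative only): a thermostat as a two-state machine.
    # That it can be DESCRIBED computationally is trivial (the C/T point);
    # whether its state IS a belief is precisely what is on trial.

    class Thermostat:
        def __init__(self, setpoint):
            self.setpoint = setpoint
            self.heating = False   # the "on"/"off" state

        def sense(self, temperature):
            # Flip the state according to the sensed temperature.
            self.heating = temperature < self.setpoint

    t = Thermostat(setpoint=20.0)
    t.sense(18.5)
    print(t.heating)   # True: the state a computationalist would gloss
                       # as "believes the room is too cold"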
But it is the truth of computationalism that is on trial here! (No need
for a trial if we can just presuppose that it's true.)
And if computationalism is not true, it is not merely silly but
downright incorrect to say that a thermostat -- or a heart, or a
weather-vane, or the weather [which likewise has "states"] -- has
beliefs.
If we are to continue to speak the same language we spoke before the
advent of computationalism and ITS beliefs, the word "belief" refers to
a mental state that only entities with mental states can have. So it
comes down to whether or not a thermostat (or a computer) is that sort
of entity.
(Remember that there are other possibilities, including hybrid,
computational/noncomputational states and systems.)
> With this strict reasoning, the question of "where is the other
> mind?" in the Chinese Room Argument is irrelevant - there is just a
> symbol system, states and algorithms. This is exactly what is going
> on in Searle's mind.
There may or may not be (among other things) a symbol system, states
and algorithms being implemented in Searle's BRAIN; but the proposition
on trial here is whether the implementation of that symbol system is
enough to generate mental states, including beliefs
(Chinese-understanding, etc.).
Searle's Chinese Room Argument attempts to show that it is not.
> By implementing the program, he is implementing two symbol systems
> (what we have termed his mind, and what by the same logic is the mind
> of the Chinese pen-pal program).
We have NO IDEA how Searle's (English-Understanding) mind is being
implemented. (To assume it's just a symbol system is to beg the
question, and prejudge the outcome of this trial!)
We do know how the Chinese T3-passing capacity is being implemented:
by the squiggling that Searle is doing.
Searle shows that in the latter implementation (the squiggling)
there is no Chinese understanding going on. That's all. Bad news for
the hypothesis that beliefs and understanding are just squiggling...
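(To make "squiggling" concrete: here is a toy Python sketch of pure
rule-following of the kind the Chinese Room reduces Searle's role to.
The rule book and the tokens are hypothetical stand-ins of my own; the
point is only that no step of the procedure ever consults what any
token means.)

    # Toy sketch (illustrative only): rule-governed symbol manipulation.
    # The tokens are opaque shapes; nothing here has access to meaning.

    RULE_BOOK = {
        "squiggle squiggle": "squoggle",
        "squiggle squoggle": "squiggle squiggle squiggle",
    }

    def room(symbols):
        # Match the input shape against the rule book and emit the
        # paired shape; no step interprets the symbols.
        return RULE_BOOK.get(symbols, "squoggle squoggle")

    print(room("squiggle squiggle"))   # "squoggle", produced without
                                       # any understanding of Chinese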
> The concept of a mind being present
> as something we can identify other than a system doing computation
> instantly goes against the fundamental point of the computationalist
> view, so the task of locating this concept we have called a 'mind' (let's
> face it, who wants to believe they are just squiggles and squoggles?) is
> itself futile - McCarthy's view is that this symbol system IS ALL A
> MIND IS.
It is one thing to take a view; another for the view to be correct; yet
another to show how/why it is correct. So far you have only said this:
McCarthy supposes that squiggling is all there is to being a mind.
It remains to show that that supposition is true (repeating it doesn't
help); and it's an uphill battle to show it's true after Searle has shown
the opposite (if he has done so)...
Stevan