From: Henderson Ian (email@example.com)
Date: Thu May 24 2001 - 15:45:23 BST
In reply to: HARNAD Stevan: "Re: Babbage/Menabrea: Analytical Engine"
>> From a strong AI standpoint,
>> even Babbage's machine may thus be capable of 'thinking' given the
>> correct algorithm, since strong AI claims that intelligence is
>> implementation-independent, and thus may be implemented by computation
> Yes, but now we must ask ourselves whether Strong AI is correct.
I do not think it is, myself: I don't believe that machines will ever become
intelligent through the manipulation of intrinsically meaningless symbols
alone. The frame problem (as encountered in practice in AI programming) and
Searle's Chinese Room argument (a more theoretical objection) both lead me
to believe that computation alone is not sufficient to create an intelligent
mind. A way must be found to attach meaning to symbols, so that every symbol
the machine uses is grounded, either directly or indirectly through other
symbols, in the real-world phenomena to which it relates.
>> Menabrea's claim is based on the fact that the machine does not *appear*
>> to think; if however it were to exhibit characteristics indicative of
>> intelligence whilst executing such an 'intelligence' algorithm, he would
>> be hard pressed to say that the machine was not indeed thinking: after
>> all, how do we know anyone else apart from ourselves thinks, unless it be
>> through the evidence of their words and actions (i.e. their 'output')?
> True. But remember the confusion with the fancy name I kept
> referring to in class: Don't confuse "ontic" and "epistemic"
> matters. One has to do with what there is, the other with what
> you can know. It is true I can't know whether anyone else is
> thinking. But whether or not they really are thinking does not
> depend in any way on whether or not I can know it! What do
> truth and reality owe to me and my limited capacities for knowing
> what's true and what's real?
> So just as, for example, quarks and superstrings either do or do
> not exist, whether or not we can know it, so either a system is
> thinking or isn't thinking (is/isn't intelligent, does/doesn't
> have a mind), irrespective of whether we can know it. Such things
> can no more be legislated by what we can or can't know than
> they can be legislated by definitions ("let us 'define'
> intelligence as XYZ").
Basically you are saying that there is an objective reality that exists
outside my mind: that the world and other people are not an illusion, but
exist independently of my own thoughts. This philosophy (realism) forms the
basis of scientific endeavour: the assumption that there *is* a world out
there whose workings can be investigated. I agree with this: how could
science get anywhere at all if we adopted a relativist stance?
> Besides, there is one case where you CAN know for sure, and that
> is your own case. You know you are intelligent (thinking, have a
> mind), and that THAT is what you mean (not something else) when
> you ask about whether any other entity is intelligent.
> And therefore you could be very wrong with your word/action
> (T2/T3) test: The words/actions could be there, but the
> intelligence (thinking, mind) could be absent.
The only way anyone can ever decide whether a candidate passes the Turing
test is by observing its words and actions, examining them, and comparing
them with their own; I don't think I had fully understood what it meant to
pass the test until now. I accept that passing the test does not at all
imply that the candidate actually does have a mind (only the candidate can
know that, if indeed it does!), but I had thought it should constitute
evidence for suggesting that it does.

I now realise that if Babbage's analytical engine, running a certain
algorithm, were to appear to Menabrea to be thinking, this would not mean
that it actually was: as Turing says, if it is indistinguishable from you,
you have no grounds for saying that it *isn't* intelligent, but that does
not mean it actually *is*. What we can know and what there is are not one
and the same. All we can conclude about a candidate that passes the Turing
test is that it would be fallacious to say that it did not have a mind
(there is no good observable reason for concluding that it *doesn't*): this
is the fundamental difference between a candidate that passes and one that
fails the test. I had thought that although words and actions couldn't
actually *determine* whether a being was intelligent, they must at least
provide empirical *evidence* for the proposition that it was: on
reflection, I see that this isn't the case.
>>> Yet it is by the laborious route of analysis that he must reach
>>> truth; but he cannot pursue this unless guided by numbers; for without
>>> numbers it is not given us to raise the veil which envelopes the
>>> mysteries of nature.
>> Menabrea appears to make the contentious implication that mathematics is
>> a part of nature, and not a product of how we think about nature (as Kant
>> and others have argued). It appears at least to me that mathematics is
>> shaped by nature rather than being an intrinsic part of its 'design'; as
>> such it is just a model, invented by humans to describe nature, rather
>> than a discovery about how nature actually works. The currently uncertain
>> position of mathematics has implications for computationalism and strong
>> AI: if mathematics is just a model of how the world works, can
>> (mathematical) computation do any more than *model* intelligence? It
>> seems not.
> Another possibility is that, rather than "nature"
> (physics/biology?) shaping mathematics, mathematics constrains
> nature (and hence science) in the same way that logic constrains
> it: No fact of nature is free to be both true and false at the
> same time. In the same way, 2 + 2 is not free to = 3.
> Do you really think we have to settle the foundations of
> mathematics before we can settle the foundations and methods of
> cognitive science? Why? How?
On reflection, I don't think it is necessary to do so. Although I don't
think that mathematics constrains nature (but rather that mathematics is a
product of how we think about nature) I don't think this precludes its use
as a scientific tool to help us investigate (or even create) intelligence.
Computation may not be a constituent of what makes *us* intelligent, but
like the mechanical workings of a prosthetic limb, it may help us create a
thinking being by offering an alternative means to model our own
functionality. In doing so, we would hopefully gain some insight into how
our own brains actually do work. From this perspective, mathematics can help
us understand intelligence, and perhaps even help us create it, although I
do not think that such a part-computational mind would work in the
same way as our own. In consequence, I think that even if mathematics can do
no more than model the way our own minds work, this wouldn't preclude
computation from playing a role in the creation of an intelligent being,
contrary to my previous claim above.
This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:31 BST