Re: Babbage/Menabrea: Analytical Engine

From: Button David (drb198@ecs.soton.ac.uk)
Date: Thu May 24 2001 - 11:21:46 BST


>HARNAD:
>So you see, today's question is already raised from the outset: Could
>the intellectual/understanding part (intelligence) itself be
>mechanical (computational) too? Or is computation merely an aid to the
>intellect?

Button:
Personally I would have to say that understanding is not computation.
However, I don't believe that computation plays no part in it. It could
well be that computation is vital to understanding, but there is surely
'something else', not itself computation, that uses computation in order
to perform tasks. What I mean is that computation can be used by almost
any entity; in my view it is the use made of the computation that is the
intelligent part, and computation is therefore an aid.

>> Button:
>> By this Menabrea suggests that the engine itself is not intelligent,
>> instead it aids intelligence.
>
>HARNAD:
>But what does that mean about the nature of intelligence itself? Does it
>mean that what is going on in the head of the intelligent being cannot
>be just computation too?
>
>Could the "interpretation" problem be the symbol grounding problem?

Button:
I don't think that intelligence is just computation. While computation is
obviously 'useful', it cannot be the whole story; otherwise a simple
calculator would be classed as an intelligent being. On that view the
engine is surely only a creation of an intelligent being, and not an
intelligent being itself.

The interpretation problem could be the symbol grounding problem. The
inability of the engine to interpret its results is similar, in a sense,
to Searle's Chinese Room Argument. The engine cannot interpret results
because it does not 'understand' what it is doing (just as Searle,
manipulating Chinese symbols by following a rule book, does not
understand Chinese). All the machine is doing is acting on some rules.
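
To make the 'acting on rules' point concrete, here is a toy sketch (my
own illustration in Python; the symbols and rule table are invented, not
anything from Menabrea or Searle): a program that maps input symbols to
output symbols by a fixed table, with no access to what any symbol
stands for.

    # Toy rule-follower: replies are chosen purely by the symbol's
    # shape, looked up in a fixed table. Nothing here 'knows' what
    # any symbol means.
    RULES = {
        "SQUIGGLE": "SQUOGGLE",
        "SQUOGGLE": "SQUIGGLE",
    }

    def respond(symbol):
        # Return the rule-book reply, or a default if no rule applies.
        return RULES.get(symbol, "NO RULE")

    print(respond("SQUIGGLE"))  # prints SQUOGGLE

Such a program gives the 'right' replies to the symbols it recognises,
yet nothing in it understands them; that, as I see it, is exactly the
engine's position.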

In terms of symbol grounding, this is part of the problem with the
engine's lack of understanding. If the symbols the engine uses were
grounded, then perhaps those symbols would take on meanings for the
engine, and so bring understanding to the machine.

>>Button:
>>Although this can be interpreted to be unintelligent due to the simple
>>following of set rules, if this were the case of a considered
>>'intelligent' system (human for example), the following of such an
>>algorithm would be taken to be intelligent - not necessarily highly
>>intelligent (whatever that means) - but surely intelligent nonetheless.
>
>HARNAD:
>I couldn't follow this. If the computation, in Babbage's case, was the
>mechanical, unintelligent part, which the intelligent being was relieved
>of having to do, by the aid of the engine, then what would it take to
>make the engine intelligent (forward engineering)? And what is going on
>in the head of the intelligent being (reverse engineering) if it is not
>just computation?

Button:
Clearly the engine itself only performs computation, and by my own
comments this is not intelligence. To make the engine intelligent,
perhaps the addition of some form of symbol grounding would be a step
towards intelligence.

It is my belief that it is understanding that makes a being intelligent.
What generates understanding I don't know, but it is not simply a case
of 'knowing' something. For example, I 'know' the formula for solving
quadratic equations, but I do not understand it.
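
(For the record, the formula I mean: for ax^2 + bx + c = 0,
x = (-b +/- sqrt(b^2 - 4ac)) / (2a). One can recall and apply it
mechanically without any grasp of why it works.)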

David Button - drb198@ecs.soton.ac.uk


