http://www.yorku.ca/dept/psych/classics/Lovelace/menabrea.htm
http://www.yorku.ca/dept/psych/classics/Lovelace/lovelace.htm
On Wed, 16 Feb 2000, Butterworth, Penny wrote:
> Butterworth:
> Although the Analytical Engine was completely incapable of intuition,
> it is interesting that research into artificial neural networks is now
> investigating the impact of machines using what is essentially
> trial-and-error in order to learn. Menabrea implies that such methods
> are at least one property of what he calls 'a thinking being', so his
> opinion on our discussions of intelligence would have been interesting,
> given these new technologies.
The question of whether Artificial Neural Networks (ANNs) are (a)
computational, and (b) capable of thinking will come up later in the
course. Certainly ability to learn is a huge part of having a mind (and
hence of passing the Turing Test).
> Butterworth:
> To what degree ANNs can be described as 'an automaton which acts according
> to the laws imposed upon it' may be debatable. Although ANNs have
> predetermined rules for learning (such as Rosenblatt's Perceptron learning
> algorithm), these could be described as ways in which the network learns
> new laws or features of the input, which are not necessarily determined
> by the operator.
But if an ANN is computational, and it has an algorithm for learning,
and being able to do that (and to use its fruits) is thinking, then
that still means thinking is computation. Learning is the acquisition
of further DATA, to be sure, but the fact that those data are not part
of the learning algorithm itself to begin with does not imply that it's
not all just computation, does it?
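A minimal sketch of Rosenblatt's Perceptron rule (in its simplest
single-unit, binary-threshold form, with an illustrative function name and
toy data of my own) may make the point concrete: the update rule is fixed
in advance by the designer, but the weights it settles on are determined
entirely by the training data.

def train_perceptron(examples, n_inputs, rate=0.1, epochs=100):
    # examples: list of (inputs, target) pairs, target in {0, 1}.
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # Fixed threshold activation: fire if the weighted sum exceeds zero.
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - output
            # Fixed, "law-imposed" update rule; the weights it produces depend on the data.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Toy data: learning logical AND from four labelled examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(examples, n_inputs=2)

The rule itself never changes; only the weights do, and both the rule and
the weight updates are ordinary computation.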
And does it make any difference what the operator or designer knows?
Can't I come up with an algorithm that is, in a sense, "smarter" than
me, in that it can be used to do things I cannot do on my own? I still
came up with the algorithm; the algorithm is still computational; and
the means by which my brain came up with the algorithm could still
just be computational too, couldn't they?
> > MENABREA:
> > Thus, although it is not itself
> > the being that reflects, it may yet be considered as the being which
> > executes the conceptions of intelligence. The cards receive the impress
> > of these conceptions, and transmit to the various trains of mechanism
> > composing the engine the orders necessary for their action.
>
> Butterworth:
> Again, Menabrea implies a particular understanding of the word
> 'intelligence', such that he can separate reflection and the execution of
> 'the conceptions of intelligence'. It is likely that many people would
> understand or agree with this description, but it is difficult to define
> this distinction logically, even for as simple a machine as the
> Analytical Engine. Perhaps this in itself is a property of human
> intelligence!
You are right that there are some rather vague and perhaps even
untenable distinctions being made here. We have intuitions about what
is and isn't intelligent, and what can and cannot be done by
computation. But those intuitions could be wrong. What can and can't we
say for sure about intelligence and computation?
> > Egerland:
> > In my opinion Menabrea's interpretation of the third point - economy of
> > intelligence - is quite interesting. Though he already pointed out that
> > the machine does not have any intelligence itself, according to his
> > point of view it still raises the amount of intelligence available. It
> > does so by not bothering an intelligent human being with monotonous
> tasks which can be carried out automatically. The question is just whether
> human beings really use their intelligence to think about more complex
> problems than the ones the machine can work on, or whether they instead
> prefer to use the freed-up time for recreational matters.
>
> Butterworth:
> I think I disagree with Matthias here, as in my opinion Menabrea's
> comment does not necessarily imply that the introduction of the machine
> increases the amount of intelligence available. I think what he was
> trying to say was that the intelligent human engineers/scientists etc had
> been wasting their time doing menial mathematical tasks which did not
> really require their intelligence, only their time. With the machine
> doing these tasks for them, their time could be used more fruitfully (in
> whatever way).
Reasonable disagreement. Or there might be more and less intensive uses
of intelligence, with rote calculation being less intensive and, say,
theorizing about intelligence being more intensive. But whether or not
it is all computational does not yet seem to be decided, does it?
Stevan
P.S. Everyone please make sure that in quote/commenting you neither
quote less nor more than necessary. Too little would be a quote that
cannot be understood on its own, so it is not clear to other skywriters
what you are really commenting on. Too much (and your own skywriting
erred more in this direction) is when you quote more than is needed to
understand what is being said about what.