From: HARNAD Stevan (firstname.lastname@example.org)
Date: Mon Mar 19 2001 - 12:48:17 GMT
On Thu, 1 Mar 2001 wrote:
> Menabrea draws a distinction between (mathematical) computation, the
> manipulation of symbols according to rules or 'laws', and the faculties
> required for 'understanding', by which he appears to mean the ability to
> semantically interpret symbols. He clearly explains that his machine will
> only be able to execute the former; the latter must be left to the
> (human) intellect.
So what does this imply about intelligence? If mechanically
manipulating symbols can be done by both a person and a machine, but
interpreting the symbols and the manipulations can only be done by a
person, what is going on in the person's head when that interpreting
happens (presumably not just computation too)?
> > MENABREA:
> > It is necessarily thus; for the machine is not a thinking being, but
> > simply an automaton which acts according to the laws imposed upon it.
> When one considers something as 'simple' as Babbage's analytical machine,
> this may seem obvious. However, it must be borne in mind that modern
> computers are fundamentally based upon the same design as Babbage's
> analytical engine. They both have memory stores for variables and are both
> capable of carrying out arithmetic operations on those variables using a
> processor (in Babbage's machine, this device is termed the
> 'mill') according to a given program. In short, they are both
> computational devices -- Turing machines.
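The sense in which both the analytical engine and a modern computer are "just" symbol manipulators can be made concrete with a toy Turing machine (a hypothetical illustration, not Babbage's actual design): the machine blindly applies rules of the form (state, symbol) -> action, and any "meaning" the symbols have lives only in the human interpreter's head.

```python
# Toy Turing machine: rules map (state, symbol) -> (write, move, next_state).
# Under one human interpretation the tape holds a unary number and the
# machine "adds one"; the machine itself just follows the rules.

RULES = {
    ("scan", "1"): ("1", +1, "scan"),  # step right over the existing 1s
    ("scan", "_"): ("1", 0, "halt"),   # at the first blank, write one more 1
}

def run(tape, state="scan", pos=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"
        write, move, state = RULES[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += move
    return "".join(tape)

print(run("111_"))  # prints "1111" -- "3 + 1 = 4" only to the interpreter
```

Whether the machine is made of brass columns or silicon is irrelevant to what it computes, which is the implementation-independence claim at issue.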
So far I could follow you: That modern computers are basically the same
as Babbage's machine, so whatever is true of the one is true of the
other. But then you say:
> From a strong AI standpoint,
> even Babbage's machine may thus be capable of 'thinking' given the correct
> algorithm, since strong AI claims that intelligence is
> implementation-independent, and thus may be implemented by computation
Yes, but now we must ask ourselves whether Strong AI is correct.
> Menabrea's claim is based on the fact that the machine does not *appear*
> to think; if however it were to exhibit characteristics indicative of
> intelligence whilst executing such an 'intelligence' algorithm, he would
> be hard pressed to say that the machine was not indeed thinking: after
> all, how do we know anyone else apart from ourselves thinks, unless it be
> through the evidence of their words and actions (i.e. their 'output')?
True. But remember the confusion with the fancy name I kept referring to
in class: Don't confuse "ontic" and "epistemic" matters. One has to do
with what there is, the other with what you can know. It is true I can't
know whether anyone else is thinking. But whether or not they really are
thinking does not depend in any way on whether or not I can know it!
What do truth and reality owe to me and my limited capacities for
knowing what's true and what's real?
So just as, for example, quarks and superstrings either do or do not
exist, whether or not we can know it, so a system either is thinking or
isn't thinking (is/isn't intelligent, does/doesn't have a mind),
irrespective of whether we can know it. Such things can no more be
legislated by what we can or cannot know than they can be legislated
by definitions ("let us 'define' intelligence as XYZ").
Besides, there is one case where you CAN know for sure, and that is your
own case. You know you are intelligent (thinking, have a mind), and that
THAT is what you mean (not something else) when you ask about whether
any other entity is intelligent.
And therefore you could be very wrong with your word/action (T2/T3)
test: The words/actions could be there, but the intelligence (thinking,
mind) could be absent.
> > MENABREA:
> > Considered under the most general point of view, the essential object of
> > the machine being to calculate, according to the laws dictated to it,
> > the values of numerical coefficients which it is then to distribute
> > appropriately on the columns which represent the variables, it follows
> > that the interpretation of formulae and of results is beyond its
> > province, unless indeed this very interpretation be itself susceptible
> > of expression by means of the symbols which the machine employs.
> Menabrea clarifies the role of the machine as nothing more than a symbol
> manipulator. The symbols are subject to human interpretation alone, unless
> it turns out that intelligence (and thus semantics) are just computation,
> as computationalism claims.
Yes, this is the fundamental question.
> > MENABREA:
> > Yet it is by the laborious route of analysis that he must reach
> > truth; but he cannot pursue this unless guided by numbers; for without
> > numbers it is not given us to raise the veil which envelopes the
> > mysteries of nature.
> Menabrea appears to make the contentious implication that mathematics is
> a part of nature, and not a product of how we think about nature (as Kant
> and others have argued). It appears at least to me that mathematics is
> shaped by nature rather than being an intrinsic part of its 'design'; as
> such it is just a model, invented by humans to describe nature, rather
> than a discovery about how nature actually works. The currently uncertain
> position of mathematics has implications for computationalism and strong
> AI: if mathematics is just a model of how the world works, can
> (mathematical) computation do any more than *model* intelligence? It seems
See the lecture notes on computation and the foundations of mathematics
(realism, formalism, logicism, intuitionism).
Another possibility is that, rather than "nature" (physics/biology?)
shaping mathematics, mathematics constrains nature (and hence science)
in the same way that logic constrains it: No fact of nature is free to
be both true and false at the same time. In the same way, 2 + 2 is not
free to = 3.
Do you really think we have to settle the foundations of mathematics
before we can settle the foundations and methods of cognitive science?
This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:23 BST