Re: Lucas: Minds, Machines and Goedel

From: Bell Simon (smb398@ecs.soton.ac.uk)
Date: Thu Mar 01 2001 - 23:51:09 GMT


LUCAS: Minds, Machines and Goedel
http://cogprints.soton.ac.uk/documents/disk0/00/00/03/56/index.html

>LUCAS:
>Goedel's theorem seems to me to prove that
>Mechanism is false, that is, that minds cannot
>be explained as machines. So also has it seemed
>to many other people: almost every mathematical
>logician I have put the matter to has confessed
>to similar thoughts, but has felt reluctant to
>commit himself definitely until he could see the
>whole argument set out, with all objections fully
>stated and properly met.

Bell:
Apart from attempting to influence the reader, this
statement contains no facts. Perhaps it is telling
that Lucas could find no logician of note willing to
put their name in support of the finished article,
as he presumably attempted.

>LUCAS:
>Goedel's theorem must apply to cybernetical machines,
>because it is of the essence of being a machine,
>that it should be a concrete instantiation of a
>formal system. It follows that given any machine
>which is consistent and capable of doing simple
>arithmetic, there is a formula which it is incapable
>of producing as being true---i.e., the formula is
>unprovable-in-the-system---but which we can see to be
>true. It follows that no machine can be a complete
>or adequate model of the mind, that minds are
>essentially different from machines.

Bell:
Lucas' statement can only be accepted if it is also
accepted that the brain, at the fundamental level
of neural activity, can carry out a Goedel-style
proof. As this is widely contested, the reliance
on Goedel's theorem as a source of proof must also
be contested.

>LUCAS:
>When we {45} consider the possibility that the mind
>might be a cybernetical mechanism we have such a
>model in view; we suppose that the brain is composed
>of complicated neural circuits, and that the
>information fed in by the senses is "processed" and
>acted upon or stored for future use. If it is such a
>mechanism, then given the way in which it is
>programmed---the way in which it is "wired up"---and
>the information which has been fed into it, the
>response---the "output"---is determined, and could,
>granted sufficient time, be calculated.

Bell:
It must be remembered at all times that the brain
does not only perform continuous functions upon the
input of its senses and memory. As the brain is a
physical device, it is affected by the surrounding
environment in which it resides. Neural activity is
not like that of an ordinary electrical circuit,
as outside interference is not compensated for.
Neurons fire sporadically and only tend towards
following high-level rules of one neuron triggering
another in a given circumstance. A neuron can fire
with no received input signals due to outside
interference, and similarly may fail to fire at a
time when its input dictates that it should. Neural
activity can only be modelled and predicted if the
state of the whole universe is within the model, so
as to remove the effect of chaos upon the model.
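
To make this concrete, here is a minimal sketch (my
own illustration, with invented parameters, not a
model of any real neuron) of a threshold unit
perturbed by environmental noise; the same input can
produce firing on one occasion and silence on
another:

import random

def noisy_neuron(inputs, weights, threshold=1.0, noise=0.5):
    # Weighted sum of the input signals, as in an idealised neuron.
    activation = sum(i * w for i, w in zip(inputs, weights))
    # Outside interference modelled as additive Gaussian noise: with
    # enough of it the unit can fire on no input, or stay silent
    # when its inputs dictate that it should fire.
    activation += random.gauss(0.0, noise)
    return activation >= threshold  # True: the neuron fires

# Identical input, differing outputs across runs:
print([noisy_neuron([1.0], [0.9]) for _ in range(5)])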

>LUCAS:
>Our idea of a machine is just this, that its
>behaviour is completely determined by the way it is
>made and the incoming "stimuli": there is no
>possibility of its acting on its own: given a
>certain form of construction and a certain input of
>information, then it must act in a certain specific
>way.

Bell:
This may be Lucas' interpretation of a machine, but
it cannot be used to model the brain, due to
fundamental flaws. Although the structure and state
of the brain is the primary factor in deciding the
response to a given input, it is not the only one.
Due to environmental effects, the response is not
entirely predictable, and the model that it follows
cannot be defined explicitly, due to the chaos
involved. This is not to say that the brain is not
mechanical in nature; to say otherwise is to say
that the property of intelligence is not emergent
from the physical brain itself, but is rather
something non-physical.

>LUCAS:
>The complete rules will determine the operations
>completely at every stage; at every stage there will
>be a definite instruction, e.g., "If the number is
>prime and greater than two add one and divide by two:
>if it is not prime, divide by its smallest factor":
>we, however, will consider the possibility of there
>being alternative instructions, e.g., "In a fraction
>you may divide top and bottom by any number which is
>a factor of both numerator and denominator". In thus
>(114) relaxing the specification of our model, so
>that it is no longer completely determinist, though
>still entirely mechanistic, we shall be able to take
>into account a feature often proposed for mechanical
>models of the mind, namely that they should contain
>a randomizing device.

Bell:
Lucas acknowledges that there is an element of
chaos that aids the brain at a fundamental level.
However, Lucas considers it to be an extra sensory
input that determines which of a defined set of
rules the brain must follow.
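
Lucas' two example instructions can be written down
directly. A minimal sketch (my own rendering in
Python, not Lucas' formulation): the first rule is
fully determinate, while in the second a randomizing
device selects only among the allowable
alternatives:

import random

def smallest_factor(n):
    # Smallest factor greater than 1 (n itself when n is prime).
    f = 2
    while n % f != 0:
        f += 1
    return f

def step(n):
    # The determinate instruction: "If the number is prime and
    # greater than two add one and divide by two: if it is not
    # prime, divide by its smallest factor."
    if n < 2:
        return n
    if smallest_factor(n) == n and n > 2:   # prime and > 2
        return (n + 1) // 2
    if smallest_factor(n) != n:             # composite
        return n // smallest_factor(n)
    return n                                # n == 2: no rule applies

def cancel(numerator, denominator):
    # The alternative instruction: "you may divide top and bottom
    # by any number which is a factor of both numerator and
    # denominator" -- the randomizer chooses, but only among the
    # permitted alternatives.
    common = [d for d in range(2, min(numerator, denominator) + 1)
              if numerator % d == 0 and denominator % d == 0]
    if not common:
        return numerator, denominator
    d = random.choice(common)
    return numerator // d, denominator // d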

>LUCAS:
>But clearly in a machine a randomizing device could
>not be introduced to choose any alternative
>whatsoever: it can only be permitted to choose
>between a number of allowable alternatives.

Bell:
Lucas considers a machine to be a device that
implements high-level rules in a non-deterministic
manner. He compares this to a brain whose neurons
can be excited in a non-deterministic manner. A
rule-based system is intended to model the output
response of a brain, where each step corresponds
to an atomic decision. Unless we consider each
neuron firing to correspond to an atomic step in
a decision process, thereby assigning each neuron
a specific and labelled task domain, the models are
not equivalent. This cannot be done, as the brain
is capable of solving an infinite set of problems.

>LUCAS:
>If there are only a definite number of types of
>operation and initial assumptions built into the
>system, we can represent them all by suitable symbols
>written down on paper. We can parallel the operation
>by rules ("rules of inference" or "axiom schemata")
>allowing us to go from one or more formulae (or even
>from no formula at all) to another formula, and we
>can parallel the initial assumptions (if any) by a
>set of initial formulae ("primitive propositions",
>"postulates" or "axioms"). Once we have represented
>these on paper, we can represent every single
>operation: all we need do is to give formulae
>representing the situation before and after the
>operation, and note which rule is being invoked.
>We can thus represent on paper any possible sequence
>of operations the machine might perform. However
>long the machine went on operating, we could, given
>enough time, paper and patience, write down an
>analogue of the machine's operations.

Bell:
For a machine that is as massively parallel as the
brain, which has no global mechanism for keeping
the sections of its structure working to the same
pace or timescale, this would be infeasible.

>LUCAS:
>Thus, construing our rules as rules of inference,
>we shall have a proof-sequence of {47} formulae,
>each one being written down in virtue of some formal
>rule of inference having been applied to some
>previous formula or formulae (except, of course, for
>the initial formulae, which are given because they
>represent initial assumptions built into the system).

Bell:
This would only be true if the brain worked in a
deterministic manner, with the environmental
influence measurable. The formal definition could
be considered an indication of probable behaviour,
but one state could never be inferred from another
with any certainty.
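
For the deterministic case Lucas describes, the
paper-and-pencil analogue is easy to picture. Here
is a toy formal system of my own invention (purely
illustrative): every operation of the machine is
recorded as a formula together with the rule by
whose virtue it was written down, yielding his
proof-sequence:

# One axiom ("primitive proposition") and one rule of inference.
axioms = ["A"]
rules = {"append-B": lambda formula: formula + "B"}

def run(steps):
    # The proof-sequence: each entry pairs a formula with the rule
    # invoked; the first entry is the initial assumption built into
    # the system.
    sequence = [(axioms[0], "axiom")]
    for _ in range(steps):
        formula, _ = sequence[-1]
        sequence.append((rules["append-B"](formula), "append-B"))
    return sequence

# However long the machine operates, its analogue on paper is just
# this list of formulae and the rules invoked:
for formula, rule in run(3):
    print(rule.ljust(8), formula)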

>LUCAS:
>We now construct a Goedelian formula in this formal
>system. This formula cannot be proved-in-the-system.
>Therefore the machine cannot produce the
>corresponding formula as being true. But we can see
>that the Goedelian formula is true: any rational
>being could follow Goedel's argument, and convince
>himself that the Goedelian formula, although
>unprovable-in-the-system, was nonetheless---in fact,
>for that very reason---true.

Bell:
This only has relevance if we consider the brain to
solve, mathematically and subconsciously, the Goedel
problem in order to recognise it as being a truth.
Recognition of truth and being able to logically
prove it are not equivalent. We can recognise a
Goedel statement as true by following the argument
explaining it, not by attempting to prove the
statement itself. In a logical sense, we are
reasoning with information about the statement and
not the statement itself.
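
The shape of this argument can be put compactly
(standard notation for Goedel's first incompleteness
theorem, not Lucas' own wording). For a consistent
formal system T containing arithmetic, the Goedel
sentence G satisfies

    T \vdash G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G \urcorner)

If T is consistent then T \nvdash G, and so, reading
the equivalence from outside the system, G is true.
The step "and so true" uses the assumption
\mathrm{Con}(T), which is information about the
system rather than a theorem of it: precisely the
distinction drawn above.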

>LUCAS:
>we can never, not even in principle, have a mechanical
>model of the mind.

Bell:
If we consider the brain to be just an ordered
mixture of chemicals and electrons, all of which
follow physical laws, then their interaction can
be modelled mechanically. Environmental effects
could be simulated to give a possible brain that
would act with the same characteristics as all
others. To argue that this model would not display
the characteristic of intelligence is to say that
intelligence is not a result of the physical
nature of the brain.

>LUCAS:
>We are trying to produce a model of the mind which
>is mechanical---which is essentially "dead"---but
>the mind, being in fact "alive", can always go one
>better than any formal, ossified, dead, system can.

Bell:
This prompts the question of where the state which
gives the brain its 'life' resides, if it is not
within the physical universe.

>LUCAS:
>The mechanist has first turn. He produces a---any,
>but only a definite one---mechanical model of the
>mind. I point to something that it cannot do, but
>the mind can. The mechanist is free to modify his
>example, but each time he does so, I am entitled to
>look for defects in the revised model. If the
>mechanist can devise a model that I cannot find
>fault with, his [262] thesis is established: if he
>cannot, then it is not proven: and since---as it
>turns out---he necessarily cannot, it is refuted. To
>succeed, he must be able to produce some definite
>mechanical model of the mind---anyone he likes, but
>one he can specify, and will stick to.

Bell:
In my view a mechanical model of the structure of
a brain would suffice. I would argue that for a
system as massively complex as the brain,
functional equivalence could only be achieved by a
model that is based upon the brain itself.
An issue with this test for intelligence is that no
two brains are functionally identical. Even more
generally, inter-species comparisons could be made.
I doubt anyone would object to the statement that
mice display intelligence, but a mouse would not
pass this game of matching what brains can and
cannot do.

>LUCAS:
>In short, however a machine is designed, it must
>proceed either at random or according to definite
>rules. In so far as its procedure is random, we
>cannot outsmart it: (120) but its performance is not
>going to be a convincing parody of intelligent
>behaviour: in so far as its procedure is in
>accordance with definite rules, the Goedel method
>can {52} be used to produce a formula which the
>machine, according to those rules, cannot assert as
>true, although we, standing outside the system, can
>see it to be true.

Bell:
This is not true. The two options, of being totally
random or totally described by definite rules, do
not even cover the mechanisms of the brain at the
physical level. If we think of the rules as those
modelled by the structure of a system, then this is
false, as interference from outside the system
could change the state in a manner not described
by the rules. If we instead think of the rules as
the physical laws, then we can think of the brain
and the machine in the same domain, both being
mechanical, with a random event being an
unpredictable but describable interaction.

>LUCAS:
>If a machine were wired to correspond to an
>inconsistent system, then there would be no
>well-formed formula which it could not produce
>as true; and so in no way could it be proved to be
>inferior to a human being. Nor could we make its
>inconsistency a reproach to it---are not men
>inconsistent too? Certainly women are, and
>politicians; and {53} even male non-politicians
>(121) contradict themselves sometimes, and a single
>inconsistency is enough to make a system inconsistent.

Bell:
A person giving a different response to the same
question on two different occasions is not
evidence of inconsistency of the system. Consistency
applies to giving the same output for a given input
mapped through a given system state. For a system
in a real-world environment, which interacts with
and is affected by its surroundings, an identical
input and internal state would never occur, as
features such as memory will have altered. Were
this not so, a brain could never learn that
1 + 1 = 2 if this was not its original response.
Consistency of the input to output mapping is
mutually exclusive with a learning process.
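
For context, Lucas' point that an inconsistent
machine could produce any well-formed formula as
true is the standard principle of explosion (ex
contradictione quodlibet); a minimal derivation:

    1. P \wedge \neg P    (the single inconsistency)
    2. P                  (from 1, conjunction elimination)
    3. P \vee Q           (from 2, disjunction introduction)
    4. \neg P             (from 1, conjunction elimination)
    5. Q                  (from 3 and 4, disjunctive syllogism)

As Q is arbitrary, a single contradiction is indeed
enough for the system to assert everything.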

>LUCAS:
>Human beings, although not perfectly consistent, are
>not so much inconsistent as fallible. A fallible but
>self-correcting machine would still be subject to
>Goedel's results.

Bell:
This is a very imprecise observation. A system is
either consistent, adhering to a strict set of
rules without contradiction, or it is inconsistent.
There is no room between the two states for
descriptions such as 'fallible'.

>LUCAS:
>the mind does indeed try out dubious axioms and
>rules of inference; but if they are found to lead
>to contradiction, they are rejected altogether.

Bell:
I would argue that this process is not a result of
the fundamental nature of the brain, but instead a
result of having a strong inclination to learn.
People often hang onto ideas even when it is proved
in no uncertain terms that the idea has no basis.
It is not as simple as a function of the brain
performing constant verification of logical rules.

>LUCAS:
>if we find that no system containing simple
>arithmetic can be free of contradictions, we shall
>have to abandon not merely the whole of mathematics
>and the mathematical sciences, but the whole of
>thought.

Bell:
Lucas' argument has been that calculation and
thought are two separate entities. This statement
prompts the question of whether this is a
contradiction, or whether Lucas is just attempting
to make grand observations.

>LUCAS:
>Goedel has shown that in a consistent system a
>formula (124) stating the consistency of the system
>cannot be proved in that system. It follows that a
>machine, if consistent, cannot produce as true an
>assertion of its own consistency: hence also that a
>mind, if it were really a machine, could not reach
>the conclusion that it was a consistent one ... we
>fairly say simply that we know we are consistent.

Bell:
Having a belief in something is not due to the mind
performing a proof of truth that is impossible to
replicate with calculation. This is ridiculous:
people believed that the world was flat.
It could be counter-argued that the brain could
perhaps do this for matters that concern itself,
such as its own consistency, as it has access to
all the information required. Against this I would
point to the contradictory religious beliefs that
people hold concerning the nature of their
consciousness. If having a self-referential belief
were due to an incalculable proof of truth, then
there would be no contradiction between people,
and indeed none in psychology.
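
For reference, the result Lucas invokes is Goedel's
second incompleteness theorem (standard statement,
not Lucas' wording): for any consistent, recursively
axiomatised system T containing enough arithmetic,

    T \nvdash \mathrm{Con}(T)

so such a machine cannot produce as true an
assertion of its own consistency, although a
strictly stronger system, or an outside reasoner
assuming \mathrm{Con}(T), can.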

>LUCAS:
>The paradoxes of consciousness arise because a
>conscious being can be aware of itself, as well as
>of other things, and yet cannot [269] really be
>construed as being divisible into parts.

Bell:
Self-consciousness is a characteristic of high
intelligence, but not a prerequisite of intelligent
behaviour itself. Human beings and some apes are
the only animals able to recognise their own image
in a mirror, and so, it could be argued, are the
only animals to be self-aware. This small set of
animals, however, is not the only one to display
signs of intelligent behaviour.

>LUCAS:
>Complexity often does introduce qualitative
>differences. Although it sounds implausible, it
>might turn out that above a certain level of
>complexity, a machine ceased to be predictable, even
>in principle, and started doing things on its own
>account, or, to use a very revealing phrase, it might
>begin to have a mind of its own.

Bell:
This does not sound implausible. A machine subject
to internal random events from environmental
interference would show more sporadic behaviour as
its complexity increased, as the effect of the
interference would grow exponentially across
interdependent sub-systems.


