Lucas: Minds, Machines and Goedel

From: Clark Graham (ggc198@ecs.soton.ac.uk)
Date: Wed Feb 28 2001 - 12:56:08 GMT


http://cogprints.soton.ac.uk/documents/disk0/00/00/03/56/index.html

Clark:
Lucas' paper "Minds, Machines and Goedel" sets out to show that
because of Goedel's theorem (discussed below), minds cannot be
modelled as machines. This is another way of saying that machines
cannot "have" a mind, ie. they cannot be intelligent (in the way that
humans are). Lucas starts by introducing Goedel's theorem:

> LUCAS:
> Essentially, we consider the formula which says, in effect, "This
> formula is unprovable-in-the-system". If this formula were
> provable-in-the-system, we should have a contradiction: for if it
> were provable-in-the-system, then it would not be unprovable-in-
> the-system, so that "This formula is unprovable-in-the-system"
> would be false: equally, if it were provable-in-the-system, then it
> would not be false, but would be true, since in any consistent
> system nothing false can be proved in-the-system, but only truths.
> So the formula "This formula is unprovable-in-the-system" is not
> provable-in-the-system, but unprovable-in-the-system. Further, if
> the formula "This formula is unprovable-in-the-system" is
> unprovable-in-the-system, then it is true that that formula is
> unprovable-in-the-system, that is, "This formula is unprovable-in-
> the- system" is true.

Clark:
It is important to emphasise that the formula is provable / unprovable
INSIDE the system, even though this leads to a long-winded
description. This is because the formula must be seen to be true
from an outside-the-system perspective; if it were not, nothing would
be proven. There is no point in discussing a formula which says
"This formula is unprovable in the system" if we (outside the system)
do not know whether it is true or false. If we did not know, the
formula could in fact be false, ie. it could be proven by the system,
and it would then be no different from any other undisputed formula
in the system.
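
To make the self-reference explicit (this is my own notation, not
Lucas'): for a given consistent formal system S, Goedel constructs a
sentence G_S such that

    G_S \leftrightarrow \neg\,\mathrm{Prov}_S(\ulcorner G_S \urcorner)

ie. G_S "says" that it is not provable in S. If S is consistent, S
proves neither G_S nor its negation, and it is only by standing
outside S that we recognise G_S as true.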

> LUCAS:
> We understand by a cybernetical machine an apparatus which performs
> a set of operations according to a definite set of rules.
> ...
> When we consider the possibility that the mind might be a
> cybernetical mechanism we have such a model in view; we suppose
> that the brain is composed of complicated neural circuits, and that
> the information fed in by the senses is "processed" and acted upon
> or stored for future use.
> ...
> Our idea of a machine is just this, that its behaviour is
> completely determined by the way it is made and the incoming
> "stimuli": there is no possibility of its acting on its own: given
> a certain form of construction and a certain input of information,
> then it must act in a certain specific way.

Clark:
This seems to be a "granny" style argument, and there are two counters
to it. Firstly, to clarify the argument: machines are deterministic
and completely predictable. Humans (or human minds) are not
predictable. Therefore machines cannot model human minds.

The first counter is that Lucas' statement also describes us (humans)
- we act according to what is happening around us ("inputs"), along
with what we can and cannot do (the way we are made, ie. physical /
mental limitations). Of course, Lucas might say that past experiences
also have a part to play in our actions, whereas they do not in
machines. There are two counters to this: one, our past experiences
have shaped us in some way, so can be included in the "way that it is
made" category of behavioural determiners, or two, machines can also
base their actions on past experiences, for example by learning, as
artificial neural networks do. If we are the same as machines in this
respect, there is little point in arguing about it.

If Lucas' statement does indeed describe humans as well as machines,
then there are two possibilities. One, "there is no possibility of
[the machine] acting on its own" (ie. a machine's actions can always
be predicted) is false, because humans' actions cannot always be
predicted; or two, humans' actions can be predicted - we just don't know
enough about how the brain works to do so. Either way, humans and
machines would be alike in this respect, so perhaps a mind-modelling
machine would be possible.

The second counter to Lucas' argument is similar to the conclusion
from the first - it is impossible to compare machines and brains using
the conjunction of "the way [the machine] is made" and "the incoming
'stimuli'" as the antecedent to "there is no possibility of its acting
on its own". This is because "the way the machine is made" is
synonymous with "the way the machine is programmed", and we do not
know how the human brain is "programmed". If we did, then we could
conceivably predict every human behaviour, and this would make us no
different from Lucas' machines. If this were the case, machines could
model minds, because in respect to Lucas' argument, they would be the
same.

However, Lucas is more concerned with machines that have a less
definite rule-base - more open-ended rules for each state they are in:

> LUCAS:
> ...instead of considering the whole set of rules which together
> determine exactly what a machine will do in given circumstances, we
> shall consider only an outline of those rules, which will delimit
> the possible responses of the machine, but not completely. The
> complete rules will determine the operations completely at every
> stage; at every stage there will be a definite instruction, e.g.,
> "If the number is prime and greater than two add one and divide by
> two: if it is not prime, divide by its smallest factor": we,
> however, will consider the possibility of there being alternative
> instructions, e.g., "In a fraction you may divide top and bottom by
> any number which is a factor of both numerator and denominator".

Clark:
This seems to be allowing the machine to have an element of choice -
to bring it in line with how human brains seem to work, ie. non-
deterministically. However, Lucas chooses to implement this choice by
a randomising device in the machine. This is obviously different to
humans, as we make choices based on past experiences and reasoning -
either we have encountered the situation before, so we know the
correct action, or we can generalise similar past experiences to the
present situation to come up with an "educated guess" as to the
correct action. There are random activities that occur "in" humans - a
neuron firing randomly in the brain, or a random event in the
outside world affecting us. To completely randomise choice, however,
seems to be taking the machine further away from modelling a mind
rather than closer.

> LUCAS:
> One could build a machine where the choice between a number of
> alternatives was settled by, say, the number of radium atoms to
> have disintegrated in a given container in the past half- minute.

Clark:
The type of randomising device suggested by Lucas is true random
number generation - the number of radium atoms to have radioactively
decayed in a container over a given period of time. This, unlike the
standard pseudo-random number generation, will make the machine
completely non-deterministic - there will be no way of predicting
which option it will take when faced with a choice.
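
As a rough illustration of the difference (my own sketch, not anything
from Lucas' paper): a seeded pseudo-random generator is reproducible,
and so predictable in principle, whereas a choice drawn from a
physical entropy source is not. Python's secrets module uses the
operating system's entropy pool rather than radium decay, but the
contrast is the same:

    import random
    import secrets

    options = ["divide by smallest factor", "add one and halve", "do nothing"]

    # Pseudo-random: anyone who knows the seed can reproduce every "choice".
    rng = random.Random(42)
    predictable_choice = rng.choice(options)

    # Entropy-based: drawn from the OS entropy pool, so there is no seed an
    # observer could use to predict the outcome in advance.
    unpredictable_choice = secrets.choice(options)

    print(predictable_choice, unpredictable_choice)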

Lucas also points out that the randomising device will not allow the
machine to choose any alternative - it will simply choose one option
from a list of alternatives, none of which will lead to inconsistency,
ie. "proving" something that is not true. In effect, this stops the
machine doing anything which is impossible. This is fair enough, as
humans can't do impossible things either.

> LUCAS:
> If such a machine were built to produce theorems about arithmetic
> (in many ways the simplest part of mathematics), it would have only
> a finite number of components, and so there would be only a finite
> number of types of operation it could do, and only a finite number
> of initial assumptions it could operate on. Indeed, we can go
> further, and say that there would only be a definite number of
> types of operation, and of initial assumptions, that could be built
> into it. Machines are definite: anything which was indefinite or
> infinite we should not count as a machine.

Clark:
It is unclear here whether Lucas would see the generalisation of this
as true: if a machine were built to model minds, it would still have
to have a finite number of components, therefore it could only do a
finite number of operations, and have only a finite number of initial
assumptions to operate on. It seems that humans are capable of an
infinite number of operations, so such a machine could not model a
mind.

However, humans are made from a finite number of components. If this
is true for both us and the machine, why should one be capable of
infinite operations and the other only a small finite set? On one
hand, perhaps humans are capable of only a finite number of
operations (just a very large number of them), so we'd just need to
build a machine with the same number of components as us in order to
model us. On the other hand, it would be easy to build a machine (or
write a program) which modified itself, and consequently could perform
an infinite number of operations. A simple example would be a machine
that added 1 to a number supplied to it. It could then modify itself
to add 2 to the number instead of one, then later to add 3, and so on.
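
A minimal sketch of such a self-modifying program (my own toy example,
not anything Lucas describes) might rewrite its own rule each time it
is used:

    class SelfModifyingAdder:
        """Toy machine that adds an increment to its input, then rewrites
        its own rule so that the next call adds a larger increment."""

        def __init__(self):
            self.increment = 1

        def process(self, number):
            result = number + self.increment
            # "Modify itself": change the rule it will follow next time.
            self.increment += 1
            return result

    machine = SelfModifyingAdder()
    print(machine.process(10))  # 11 (adds 1)
    print(machine.process(10))  # 12 (now adds 2)
    print(machine.process(10))  # 13 (now adds 3)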

> LUCAS:
> Now any mechanical model of the mind must include a mechanism which
> can enunciate truths of arithmetic, because this is something which
> minds can do ... But in this one respect they cannot do so well: in
> that for every machine there is a truth which it cannot produce as
> being true, but which a mind can. This shows that a machine cannot
> be a complete and adequate model of the mind. It cannot do
> everything that a mind can do, since however much it can do, there
> is always something which it cannot do, and a mind can. This is not
> to say that we cannot build a machine to simulate any desired piece
> of mind-like behaviour: it is only that we cannot build a machine
> to simulate every piece of mind-like behaviour.

Clark:
The first question to ask here is why must a mechanical model
explicitly be able to "enunciate truths of arithmetic"? Are we born
with this "skill", or do we learn it somehow? In any case, there must
be some things we are born capable of, and so one of these could
easily replace the example above.

Lucas brings in Goedel's theorem here: because there can always be a
formula in a machine which, although seen from outside (ie. by us) to
be true, cannot be proven true by the machine, machines cannot do
everything that minds can do. This may well be true, but what Lucas
does not look at is why this occurs. How is it that we see it to be
true and not the machine? Clearly we have something the machine does
not, and Lucas' argument turns into an instance of the symbol
grounding problem. If such a machine had some connection to the
"outside world" (such as we have) through sensorimotor inputs, then
surely it too would be able to see the formula as true? When a
Goedelian formula can be seen to be true from outside the system (by
us), it is only SEEN to be true - not proven to be true. Therefore, as
long as the machine can also see it to be true, it may well be a model
of a mind. On the other hand, Goedel's theorem applies to any formal
axiomatic system which can do arithmetic. If brains/minds are also
such a system, they must also have a Goedel formula, which would be
seen to be true by some other outside observer. If this is true, then
it would not matter that a machine modelling the mind had a Goedel
formula in it, as the mind it was modelling would have, too.

The last sentence of the quote can also be seen as a "granny"
argument: machines can be built to simulate "any desired piece of
mind-like behaviour", but cannot be built to simulate every piece. Lucas
counters the obvious objection to this statement by using Goedel's
theorem:

> LUCAS:
> We can use the same analogy also against those who, finding a
> formula their first machine cannot produce as being true, concede
> that that machine is indeed inadequate, but thereupon seek to
> construct a second, more adequate, machine, in which the formula
> can be produced as being true. This they can indeed do: but then
> the second machine will have a Goedelian formula all of its own,
> constructed by applying Goedel's procedure to the formal system
> which represents its (the second machine's) own, enlarged, scheme
> of operations.

Clark:
There are two ways of looking at this argument from Lucas. Firstly, he
says machines can simulate any one part of mind-like behaviour. This
implies that they have no inadequacies in simulating this - no Goedel
formula is present in the system. If this were true, modelling a mind
would simply be a matter of modelling various "pieces" separately,
then sticking all the results together. However, it is unlikely that
this is what Lucas meant. I think that he meant that when trying to
model a mind, some pieces will be modelled well (or be
indistinguishable from the real thing), but others will succumb to
Goedel's theorem. Clearly, we cannot then just stick other models onto
this first one, because Goedel's theorem says that the resultant will
also be flawed. Consider, though, a machine that DID model the human
mind. It may still have a Goedel formula in its system, but so what?
If it is Turing-indistinguishable from a real mind, all this shows is
that we probably have a Goedel formula as well.

> LUCAS:
> We are trying to produce a model of the mind which is mechanical---
> which is essentially "dead"---but the mind, being in fact "alive",
> can always go one better than any formal, ossified, dead, system
> can. Thanks to Goedel's theorem, the mind always has the last word.

Clark:
Why does a model being "mechanical" mean that it is "dead"? Lucas
says that the main difference between a machine modelling a mind and a
real mind is that the former is a "formal" and "consistent" system
("dead"), whereas the latter is not. And why would we want to produce
a "dead" model of the mind anyway? It would not be a true model of the
mind if it were not "alive" (whatever Lucas means by this) in the same
way as a real mind is.

The argument that machines are mechanical and minds are not is another
"granny" style argument. Brains are basically causal systems -
something happens, and neurons fire; this in turn creates another
effect, such as movement - they can therefore be considered to be
"mechanical". Lucas may also say that machines are formal systems,
whereas minds are not (although he does not say precisely what minds
ARE). I understand "formal systems" to mean those which follow a
rigorous set of rules based on a finite number of axioms. As we do not
know how the brain works, who is to say that the brain is not a formal
system as well? If it is, then a formal machine may well be able to
model it.

> LUCAS:
> We could construct a machine with the usual operations, and in
> addition an operation of going through the Goedel procedure, and
> then producing the conclusion of that procedure as being true; and
> then repeating the procedure, and so on, as often as required. This
> would correspond to having a system with an additional rule of
> inference which allowed one to add, as a theorem, the Goedelian
> formula of the rest of the formal system, and then the Goedelian
> formula of this new, strengthened formal system, and so on. It
> would be tantamount to adding to the original formal system an
> infinite sequence of axioms, each the Goedelian formula of the
> system hitherto obtained.

Clark:
Here, Lucas puts forward his second objection to "machines can be made
to model minds". It seems like a very strange argument for Lucas to
use, as it is basically re-stating Goedel's theorem, and no-one (or
probably almost no-one) is in disagreement with Goedel. If a Goedel
formula is found within a system, and it is made an axiom in order to
get rid of it, another Goedel formula can be found in the new expanded
system. This obviously happens no matter how many times a Goedel
formula is made an axiom. We are now back to the point made earlier -
there must therefore be something that real minds have that is not
present in Lucas' models of minds, such as a connection to the world
outside the system. As Lucas seems to agree that this is the main
stumbling block for mind-modellers, perhaps this connection to the
world outside the system is important, even necessary. On the other
hand, minds may also have a Goedel formula, in which case it doesn't
matter if a machine trying to model us has one also.
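
The process Lucas describes can be written out schematically (my own
notation): start from the original system and, at each stage, add the
previous system's Goedel sentence as a new axiom,

    S_0 = S, \qquad S_{n+1} = S_n \cup \{\, G_{S_n} \,\}

Goedel's theorem then applies afresh to each S_{n+1}, which has an
unprovable-but-true sentence of its own; even the union of all the
S_n, provided it is still effectively axiomatised, has its own Goedel
sentence.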

> LUCAS:
> In short, however a machine is designed, it must proceed either at
> random or according to definite rules. In so far as its procedure
> is random, we cannot outsmart it: but its performance is not going
> to be a convincing parody of intelligent behaviour: in so far as
> its procedure is in accordance with definite rules, the Goedel
> method can be used to produce a formula which the machine,
> according to those rules, cannot assert as true, although we,
> standing outside the system, can see it to be true.

Clark:
It seems odd that these are the only two ways in which a machine must
proceed. Lucas does not have a convincing argument that humans or
minds are different from machines - humans are causal systems. We
certainly do not proceed at random, and we may or may not proceed
according to rules. If we do proceed by rules, then a rule-following
machine could model a mind. If we do not, then it seems that there
might be some other option for machine that Lucas has not considered.
For example, memories / past experiences / things we have learnt can
determine the way we proceed. This can be seen either as another rule
(ie. look for similar experiences and proceed accordingly), in which
case a rule-following machine could implement this, or something
different than rules. If the latter is the case, why can't a machine
implement this? It is a causal system, and if its internal symbols /
"memories" are grounded in the outside world (ie. it can understand
them in relation to other symbols it understands), there seems no
reason that a machine could not model a mind (in this respect at
least).

> LUCAS:
> If we really were inconsistent machines, we should remain content
> with our inconsistencies, and would happily affirm both halves of a
> contradiction. Moreover, we would be prepared to say absolutely
> anything---which we are not...This surely is a characteristic of
> the mental operations of human beings: they are selective: they do
> discriminate between favoured---true---and unfavoured---false---
> statements...Human beings, although not perfectly consistent, are
> not so much inconsistent as fallible.

> A fallible but self-correcting machine would still be subject to
> Goedel's results. Only a fundamentally inconsistent machine would
> escape.

Clark:
Lucas says here that minds are almost consistent, and certainly not
inconsistent. In fact, his description of minds here is quite
different to ones he has made previously - minds are (almost)
consistent systems, and are "selective", ie. they choose which action
to take from a list of options. This seems similar to the rule-
following machines Lucas has previously distanced minds from.

A "fallible but self-correcting machine" would be possible, but
Goedel's theorem would still apply to it. Why, then, does Goedel's
theorem not apply to us? Lucas almost addresses this point later in
the paper:

> LUCAS:
> Goedel has shown that in a consistent system a formula stating the
> consistency of the system cannot be proved in that system. It
> follows that a machine, if consistent, cannot produce as true an
> assertion of its own consistency: hence also that a mind, if it
> were really a machine, could not reach the conclusion that it was a
> consistent one. For a mind which is not a machine no such
> conclusion follows. All that Goedel has proved is that a mind
> cannot produce a formal proof of the consistency of a formal system
> inside the system itself: but there is no objection to going
> outside the system and no objection to producing informal arguments
> for the consistency either of a formal system or of something less
> formal and less systematized. Such informal arguments will not be
> able to be completely formalized: but then the whole tenor of
> Goedel's results is that we ought not to ask, and cannot obtain,
> complete formalization.

Clark:
Lucas agrees that Goedel's theorem would not apply to a mind if it
were not a machine, but then goes on to argue that a mind cannot prove
its own consistency, but can look outside of the system in order to
come up with informal arguments for its consistency. This seems to
indicate that Goedel's theorem DOES apply to minds, and if it does,
the mind must be a machine of some sort. This mind which Goedel's
theorem applies to can look outside of the system, so why can't a
model of a mind? Humans are not completely formalised, and machines
attempting to model us would not be either.
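
In symbols (my shorthand, not Lucas'), this is Goedel's second
incompleteness theorem: if S is a consistent, effectively axiomatised
system containing arithmetic, then

    S \nvdash \mathrm{Con}(S)

ie. S cannot prove the formula asserting its own consistency, although
that consistency may be argued for, informally, from outside S.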

It seems unlikely that any human mind could come to the conclusion
that it was consistent. Lucas himself does not believe in this
consistency. One could only go about "proving" their supposed
consistency either by knowing exactly how their mind worked, and from
this proving it beyond doubt, or by listing instances in their life when
they believed they were acting consistently. The former is impossible
at present, as we do not know how minds / brains work, and the latter
is not proof - just corroborating evidence, and a counter-example to a
person's consistency could (in almost all cases) be easily thought of.

> LUCAS:
> The essence of the Goedelian formula is that it is self-referring.
> It says that "This formula is unprovable-in-this-system". When
> carried over to a machine, the formula is specified in terms which
> depend on the particular machine in question. The machine is being
> asked a question about its own processes. We are asking it to be
> self-conscious, and say what things it can and cannot do...the
> concept of a conscious being is, implicitly, realized to be
> different from that of an unconscious object.
> ...
> Although conscious beings have the power of going on, we do not
> wish to exhibit this simply as a succession of tasks they are able
> to perform, nor do we see the mind as an infinite sequence of
> selves and super-selves and super-superselves. Rather, we insist
> that a conscious being is a unity, and though we talk about parts
> of the mind, we do so only as a metaphor, and will not allow it to
> be taken literally.
> ...
> A machine can be made in a manner of speaking to "consider" its own
> performance, but it cannot take this "into account" without thereby
> becoming a different machine, namely the old machine with a "new
> part" added. But it is inherent in our idea of a conscious mind
> that it can reflect upon itself and criticize its own performances,
> and no extra part is required to do this: it is already complete,
> and has no Achilles' heel.

Clark:
Lucas now says that a machine with a Goedel formula present in it
cannot "see it to be true" (obviously it cannot be proven either, as
this is what Goedel's theorem states), because to do so would mean it
would have to ask itself questions about its own capabilities, and if
it could do this, it would be self-conscious. He then says that a
machine can take its own performance into account, but only by adding
an extra part. Presumably Lucas means that if this extra part were
added, then another part would be needed in order to "take into
account" the performance of the first new part, and so on, according
to Goedel's theorem. However, this again brings us to the question of
how we can consider our own performances without adding extra parts
onto ourselves indefinitely. According to Lucas, it is because we are
self-conscious, but he does not explain this further.

Lucas says that a machine can consider its own performance - to say
what it can and cannot do. This is true - a program could list off the
functions it could perform, and anything not in this list it could not
perform (at the time of asking, anyway). Whilst it would take an
infinitely long period of time to list all the things it could not do,
the same would be true of humans, so there is little point in this
argument.
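
As a minimal sketch of what such self-reporting might look like (my
own example, and of course far short of what Lucas means by
self-consciousness), a Python program can enumerate its own callable
operations at run time:

    import inspect

    class ToyMachine:
        """Toy machine with a few fixed operations."""

        def add_one(self, n):
            return n + 1

        def double(self, n):
            return n * 2

        def capabilities(self):
            # Report the operations this machine can currently perform by
            # inspecting its own methods; anything not listed, it cannot do.
            return [name for name, _ in inspect.getmembers(self, inspect.ismethod)
                    if not name.startswith("_")]

    machine = ToyMachine()
    print(machine.capabilities())  # ['add_one', 'capabilities', 'double']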

However, Lucas claims that a machine could not take these capabilities
into account when considering its own performance. Yet if the
machine had some concept of the meaning of its functions, it could
perhaps take into account its limits when asked to perform a task.
Even if it did not, it would still implicitly take into account its
capabilities: if it were asked to perform a task that it could not
do, it would not do it - no deep soul-searching would be required.

To be self-conscious is to be aware of one's existence, and it seems
that this comes about through some (basic, even sub-conscious)
interaction with the world, for example, feedback through senses. If a
mind-modelling machine had such feedback, perhaps it could become
aware of its own existence in the world.

> LUCAS:
> ...[if] above a certain level of complexity, a machine ceased to be
> predictable, even in principle, and started [to do] things on its
> own account, or, to use a very revealing phrase, ... begin to
> have a mind of its own...But then it would cease to be a machine,
> within the meaning of the act...It is essential for the mechanist
> thesis that the mechanical model of the mind shall operate
> according to "mechanical principles", that is, that we can
> understand the operation of the whole in terms of the operations of
> its parts, and the operation of each part either shall be
> determined by its initial state and the construction of the
> machine, or shall be a random choice between a determinate number
> of determinate operations. If the mechanist produces a machine
> which is so complicated that this ceases to hold good of it, then
> it is no longer a machine for the purposes of our discussion, no
> matter how it was constructed.

Clark:
The general definition of a machine (or at least the one that I use)
is a causal system that follows the laws of physics. However, Lucas
seems to define a machine as something that is not greater than the
sum of its parts. A mechanist believes that organic and inorganic
"systems" operate in the same way, ie. a complete copy (in the generic
sense) of an organic system can be made with an inorganic system. If a
mechanist builds a system that completely models the human mind, it
will be inorganic, but for some reason will not be able to be
explained.

Lucas takes the basis of this argument from Turing, who said that
below a certain "critical" size, a mind-modeller will not be complete/
intelligent, but when the critical size is achieved, it will be. This
seems correct - if we were to build a mind-modelling machine that was
Turing-indistinguishable from a real mind, and started to take pieces
off it, there would come a point where the machine would no longer be
intelligent. Therefore, the same must be true when working in the
opposite direction. It would be almost useless if we built an
intelligent system, but could not explain how it worked - we would
only be slightly ahead of where we started (as we would have proved
that such a system could be built, but no more). But how would this
situation come about? By the above argument, it may be just a single
line of code that separates an intelligent system from one that is
not. Lucas does not explain how such a small addition would have such
a drastic effect on the system that it would render it completely
impossible to understand.

The argument that "a mind is greater than the sum of its parts,
whereas a machine is not" seems to be flawed. On one hand, we do not
fully understand all the component parts of a mind, and so cannot say
for certain that it is greater than the sum of these. On the other
hand, a few pieces of inanimate wood, doing nothing on their own, can
be assembled into a lever mechanism, capable of moving great weights
with comparatively little applied force. Could this simple machine be
considered to be greater than the sum of its parts, and if so, why
couldn't a more complex mind-modeller?
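
The lever claim is just the standard law of the lever (stated in my
own notation, not Lucas'): a small force applied far from the pivot
balances a large load close to it,

    F_{applied} \cdot d_{applied} = F_{load} \cdot d_{load}

so, for example, 10 N applied 2 m from the fulcrum can hold 200 N
placed 0.1 m from it.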

> LUCAS:
> In fact we should say briefly that any system which was not floored
> by the Goedel question was eo ipso not a Turing machine, i.e., not
> a machine within the meaning of the act.

Clark:
Lucas seems to be saying that a system not overcome by Goedel's
theorem cannot be a Turing machine (and therefore cannot be
implemented as some kind of computer program) because machines are
overcome by Goedel's theorem. This seems true, but if minds can also
be overcome by Goedel's theorem, this does not matter. If they cannot,
then Lucas is right (to a degree), and something greater than a Turing
machine is needed to completely model the mind.

A Turing machine and a machine as I understand it are two completely
different entities - the first can perform any computational task (ie.
any program on a computer can be represented by a Turing machine), and
the second is any causal physical system. Whilst we (humans) may be
the latter, it seems that we are not the former. Not everything we do
is computation. Therefore it seems correct to say that we could not be
modelled by a Turing machine.

> LUCAS:
> If the proof of the falsity of mechanism is valid, it is of the
> greatest consequence for the whole of philosophy...If we were to
> be scientific, it seemed that we must look on human beings as
> determined automata, and not as autonomous moral agents; if we
> were to be moral, it seemed that we must deny science its due, set
> an arbitrary limit to its progress in understanding human
> neurophysiology, and take refuge in obscurantist mysticism...
> It will still be possible to produce mechanical models of the mind.
> Only, now we can see that no mechanical model will be completely
> adequate, nor any explanations in purely mechanist terms. We can
> produce models and explanations, and they will be illuminating:
> but, however far they go, there will always remain more to be said.
> There is no arbitrary bound to scientific enquiry: but no
> scientific enquiry can ever exhaust the infinite variety of the
> human mind.

Clark:
Just because the mind is capable of an infinite variety does not mean
it will take infinite scientific enquiry to explain it. Computers are
capable of an infinite number of tasks, and we can completely explain
them. The size of the sides of a square could be any one of an
infinite number of possibilities, yet it does not seem that there is
some vital piece of the puzzle that has been missed in the explanation
(or modelling) of them. If humans are "autonomous moral agents", it
might make us harder to understand than squares, but there is no
evidence to suggest it would be impossible. Morals are just a set of
general rules, which may be modified by individuals, and followed
almost consistently (certainly not completely inconsistently), just as
Lucas' machines follow their rules in their operation.

Even if all of Lucas' arguments were completely true, the paper would
not be a "proof of the falsity of mechanism" in the formal sense,
because it assumes that we know how the mind works, both in operation
and in terms of its constituent parts. We understand neither in full,
and although Lucas seems to show that it will probably take something
that cannot be described by a Turing machine to completely model a
mind, he is far from proving it.

Graham Clark.
_____________________________


http://www.ecs.soton.ac.uk/~ggc198


