Re: Lucas, J. (1961) Minds, Machines and Goedel

From: Edwards, Dave
Date: Mon Feb 21 2000 - 15:41:41 GMT

Minds, Machines and Goedel by J.R.Lucas

> Goedel's theorem seems to me to prove that Mechanism is false, that is, that
> minds cannot be explained as machines.

What Lucas is saying in this paper is that it is impossible - by Goedel's
theorem - to produce a model of the mind using purely mechanical methods, i.e.
a machine: such a machine cannot be intelligent, self-aware or conscious, and
cannot be more than the sum of its parts - it cannot be mind-like (Lucas says
this at the end of the paper).

> Goedel's theorem states that in any consistent system which is strong enough to
> produce simple arithmetic there are formulae which cannot be proved-in-the-
> system, but which we can see to be true. Essentially, we consider the formula
> which says, in effect, "This formula is unprovable-in-the-system". If this
> formula were provable-in-the-system, we should have a contradiction: for if it
> were provable-in-the-system, then it would not be unprovable-in-the-system, so
> that "This formula is unprovable-in-the-system" would be false: equally, if it
> were provable-in-the-system, then it would not be false, but would be true,
> since in any consistent system nothing false can be proved-in-the-system, but
> only truths. So the formula "This formula is unprovable-in-the-system" is not
> provable-in-the-system, but unprovable-in-the-system. Further, if the formula
> "This formula is unprovable-in-the-system" is unprovable-in-the-system, then
> it is true that that formula is unprovable-in-the-system, that is, "This
> formula is unprovable-in-the-system" is true.

Basically, this says that there is a formula which, if the system could prove
it, would be false - so a consistent system cannot prove it, and the formula is
therefore true. I fully agree with this, given that the system is consistent
and not intelligent.
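The dichotomy can be sketched in a few lines (my own illustration, not Lucas's formalism): the sentence G asserts its own unprovability, so its truth value is simply the negation of its provability.

```python
def truth_of_G(provable: bool) -> bool:
    # G says "G is unprovable-in-the-system", so G is true
    # exactly when it is not provable.
    return not provable

# If the system proved G, G would be false -- the system would have
# proved a falsehood, contradicting its consistency.
assert truth_of_G(provable=True) is False

# So a consistent system cannot prove G; but then G is unprovable,
# which is exactly what G asserts -- G is true but unprovable.
assert truth_of_G(provable=False) is True
```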

> Goedel's theorem must apply to cybernetical machines, because it is of the
> essence of being a machine, that it should be a concrete instantiation of a
> formal system. It follows that given any machine which is consistent and
> capable of doing simple arithmetic, there is a formula which it is incapable of
> producing as being true---i.e., the formula is unprovable-in-the-system---but
> which we can see to be true. It follows that no machine can be a complete or
> adequate model of the mind, that minds are essentially different from machines.

Given the limitations of Lucas's definition of a machine, he states that this
machine cannot prove a Goedelian formula in-the-system, but a human standing
outside the system can. He does not even entertain the notion that a machine
standing outside the system - so that, like the human, it is not referring to
itself - may be able to prove it.

> Instead of considering what a completely determined machine must do, we shall
> consider what a machine might be able to do if it had a randomizing device that
> acted whenever there were two or more operations possible, none of which could
> lead to inconsistency.

A probability distribution would be a good method of choosing among the
possible operations.
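Such a randomizing device might be sketched as follows (a minimal illustration; the operation names and weights are hypothetical):

```python
import random

def choose_operation(operations, weights):
    """Pick one of several operations, none of which leads to
    inconsistency, according to a probability distribution."""
    return random.choices(operations, weights=weights, k=1)[0]

ops = ["apply_rule_A", "apply_rule_B", "apply_rule_C"]  # hypothetical
op = choose_operation(ops, weights=[0.5, 0.3, 0.2])
assert op in ops
```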

> The conclusions it is possible for the machine to produce as being true will
> therefore correspond to the theorems that can be proved in the corresponding
> formal system. We now construct a Goedelian formula in this formal system. This
> formula cannot be proved-in-the-system. Therefore the machine cannot produce
> the corresponding formula as being true. But we can see that the Goedelian
> formula is true. Now any mechanical model of the mind must include a mechanism
> which can enunciate truths of arithmetic, because this is something which minds
> can do. But in this one respect they cannot do so well: in that for every
> machine there is a truth which it cannot produce as being true, but which a
> mind can.

This is Lucas's main argument that such a limited machine cannot be like a
mind: it cannot prove the Goedelian formula, but a human OUTSIDE the system
can. Is there a Goedelian formula for the mind - one which a machine may be
able to prove, as it stands outside the system?

> This shows that a machine cannot be a complete and adequate model of the mind.
> It cannot do everything that a mind can do, since however much it can do, there
> is always something which it cannot do, and a mind can.

This is true because Lucas has defined a machine such that it cannot have
attributes that the mind has, and therefore he can say that a mind can always
do things a machine cannot.

> This is not to say that we cannot build a machine to simulate any desired piece
> of mind-like behaviour: it is only that we cannot build a machine to simulate
> every piece of mind-like behaviour. We can (or shall be able to one day) build
> machines capable of reproducing bits of mind-like behaviour, and indeed of
> outdoing the performances of human minds: but however good the machine is, and
> however much better it can do in nearly all respects than a human mind can, it
> always has this one weakness, this one thing which it cannot do, whereas a mind
> can. The Goedelian formula is the Achilles' heel of the cybernetical machine.
> And therefore we cannot hope ever to produce a machine that will be able to do
> all that a mind can do: we can never, not even in principle, have a mechanical
> model of the mind.

This is a contradiction. He states that we can reproduce any piece of mind-like
behaviour. If this is so, why not produce those parts that enable a mind to
solve any Goedelian formula (not every part of the mind may be needed for this)?

> This conclusion will be highly suspect to some people. They will object first
> that we cannot have it both that a machine can simulate any piece of mind-like
> behaviour, and that it cannot simulate every piece. [...] However complicated a
> machine we construct, it will, if it is a machine, correspond to a formal
> system, which in turn will be liable to the Goedel procedure for finding a
> formula unprovable-in-that-system. This formula the machine will be unable to
> produce as being true, although a mind can see that it is true. And so the
> machine will still not be an adequate model of the mind.

He states that however complicated we make a machine, constricted by the
limitations, it will never "be an adequate model of the mind."
I fully agree with this.

> We are trying to produce a model of the mind which is mechanical---which is
> essentially "dead"---but the mind, being in fact "alive", can always go one
> better than any formal, ossified, dead, system can. Thanks to Goedel's theorem,
> the mind always has the last word.

Here Lucas states, by his own definition, that his machine can never be a model
of the mind. I do not understand the reasoning behind this: he is trying to
create a model of the mind without the fundamental functions of the mind -
without that which makes a mind a mind.

> The mechanical model must be, in some sense, finite and definite: and then the
> mind can always go one better.

The human mind may be finite and definite, we just haven't found the limits yet.

> We are not discussing whether machines or minds are superior, but whether they
> are the same.

> In some respect machines are undoubtedly superior to human minds; and the
> question on which they are stumped is admittedly, a rather niggling, even
> trivial, question. But it is enough, enough to show that the machine is not the
> same as a mind.

Here he says that the smallest difference between a mind and a machine is
enough to show that they are not the same. Well, let the machine have the
attributes that a mind does, and maybe it will not be different.

> Deeper objections can still be made. Goedel's theorem applies to deductive
> systems, and human beings are not confined to making only deductive inferences.
> Goedel's theorem applies only to consistent systems, and one may have doubts
> about how far it is permissible to assume that human beings are consistent.
> Goedel's theorem applies only to formal systems, and there is no a priori bound
> to human ingenuity which rules out the possibility of our contriving some
> replica of humanity which was not representable by a formal system.

Here, Lucas almost admits that Goedel's theorem should not apply to a model of
the mind. He says that humans are not purely deductive, consistent, formal
systems (the only systems to which Goedel's theorem applies), so a model of the
mind need not be one either - and hence Goedel's theorem need not apply to it.

> Human beings are not confined to making deductive inferences, and it has been
> urged by C.G. Hempel and Hartley Rogers that a fair model of the mind would
> have to allow for the possibility of making non-deductive inferences, and these
> might provide a way of escaping the Goedel result.

As humans are not confined to making deductive inferences, what if the machine
isn't either? Lucas argues that this method would produce not inconsistent
results, but wrong ones; hence it would not be an adequate model of the mind.

> In short, however a machine is designed, it must proceed either at random or
> according to definite rules. In so far as its procedure is random, we cannot
> outsmart it: but its performance is not going to be a convincing parody of
> intelligent behaviour: in so far as its procedure is in accordance with
> definite rules, the Goedel method can be used to produce a formula which the
> machine, according to those rules, cannot assert as true, although we, standing
> outside the system, can see it to be true.

Again he makes the statement that a human standing outside the system can see
the Goedelian formula to be true, but the machine cannot. Could it be that
anything (human or machine) inside the system cannot see it to be true?

> Goedel's theorem applies only to consistent systems. All that we can prove
> formally is that if the system is consistent, then the Goedelian formula is
> unprovable-in-the-system. To be able to say categorically that the Goedelian
> formula is unprovable-in-the-system, and therefore true, we must not only be
> dealing with a consistent system, but be able to say that it is consistent.
> And, as Goedel showed in his second theorem---a corollary of his first---it is
> impossible to prove in a consistent system that that system is consistent.

Goedel's second theorem states that it is impossible to show, from within a
system, that the system is consistent. Therefore, Lucas states that the human
and the machine are assumed to be consistent because they decide to be, i.e.
any recognised inconsistency will not be tolerated but will be retracted.

> [...] are not men inconsistent too? Certainly women are, and politicians; and
> even male non-politicians contradict themselves sometimes, and a single
> inconsistency is enough to make a system inconsistent.


> Human beings, although not perfectly consistent, are not so much inconsistent
> as fallible.

> A fallible but self-correcting machine would still be subject to Goedel's
> results. Only a fundamentally inconsistent machine would escape. Could we have
> a fundamentally inconsistent, but at the same time self-correcting machine,
> which both would be free of Goedel's results and yet would not be trivial and
> entirely unlike a human being? A machine with a rather recherché inconsistency
> wired into it, so that for all normal purposes it was consistent, but when
> presented with the Goedelian sentence was able to prove it?

Lucas suggests that a fundamentally inconsistent machine would escape Goedel's
theorem. This in turn brings up difficulties. A fundamentally inconsistent, but
at the same time self-correcting machine is found to be unacceptable and hence
not an adequate model for the mind.

> We can see how we might almost have expected Goedel's theorem to distinguish
> self-conscious beings from inanimate objects. The essence of the Goedelian
> formula is that it is self-referring. It says that "This formula is unprovable-
> in-this-system". When carried over to a machine, the formula is specified in
> terms which depend on the particular machine in question. The machine is being
> asked a question about its own processes. We are asking it to be self-
> conscious, and say what things it can and cannot do. Such questions notoriously
> lead to paradox.

The Goedelian formula refers to itself. Lucas says that in order for a machine
to evaluate this, it must be self-conscious. Why must this be so? Can the
machine not evaluate itself without being self-conscious? The machine could
know what it is doing (evaluate itself) without being self-conscious (knowing
that it is doing it).
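This point can be illustrated mechanically: self-reference needs no awareness. A standard Python quine prints its own source code - it "refers to itself" completely, yet is plainly not self-conscious:

```python
# A classic two-line quine: the program's output is its own source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick - embedding a description of the program inside the program - is essentially the same diagonal device Goedel used to make a formula talk about itself.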

> The paradoxes of consciousness arise because a conscious being can be aware of
> itself, as well as of other things, and yet cannot really be construed as being
> divisible into parts. It means that a conscious being can deal with Goedelian
> questions in a way in which a machine cannot, because a conscious being can
> both consider itself and its performance and yet not be other than that which
> did the performance. A machine can be made in a manner of speaking to
> "consider" its own performance, but it cannot take this "into account" without
> thereby becoming a different machine, namely the old machine with a "new part"
> added.

Lucas says that a machine cannot evaluate the part that is doing the
evaluating, and hence cannot consider its own performance, and so cannot
answer the Goedelian formula.
But a machine can assume that the part doing the evaluating has a certain
performance, and can include that in the total evaluation. Therefore it can
evaluate its own performance.

> From Turing's argument:
> So far, we have constructed only fairly simple and predictable artefacts. When
> we increase the complexity of our machines there may, perhaps, be surprises in
> store for us. He draws a parallel with a fission pile. Below a certain
> "critical" size, nothing much happens: but above the critical size, the sparks
> begin to fly. So too, perhaps, with brains and machines. [...] Turing is
> suggesting that it is only a matter of complexity, and that above a certain
> level of complexity a qualitative difference appears, so that "super-critical"
> machines will be quite unlike the simple ones hitherto envisaged.

Turing suggests that when a brain reaches a certain complexity, it becomes
greater than the sum of its parts - which I believe is possible.

> It would begin to have a mind of its own when it was no longer entirely
> predictable and entirely docile, but was capable of doing things which we
> recognized as intelligent, and not just mistakes or random shots, but which we
> had not programmed into it. But then it would cease to be a machine, within the
> meaning of the act.

Lucas says that when a machine has certain mind-like attributes, it becomes a
mind. He also says that it would then cease to be a machine, due to the
limitations he imposes on being a machine.

> What is at stake in the mechanist debate is not how minds are, or might be,
> brought into being, but how they operate. It is essential for the mechanist
> thesis that the mechanical model of the mind shall operate according to
> "mechanical principles", that is, that we can understand the operation of the
> whole in terms of the operations of its parts, and the operation of each part
> either shall be determined by its initial state and the construction of the
> machine, or shall be a random choice between a determinate number of
> determinate operations. If the mechanist produces a machine which is so
> complicated that this ceases to hold good of it, then it is no longer a machine
> for the purposes of our discussion, no matter how it was constructed. We should
> say, rather, that he had created a mind.

It is a good theory, to be able to create a model of the mind where it is just
the sum of its parts and we can see exactly what and how it works.
Unfortunately, it seems that we cannot do this; Lucas's argument admits as
much. It seems that the mind is either very complicated, or it is greater than
the sum of its parts.

> We should take care to stress that although what was created looked like a
> machine, it was not one really, because it was not just the total of its parts.
> One could not tell what it was going to do merely by knowing the way in which
> it was built up and the initial state of its parts: one could not even tell the
> limits of what it could do, for even when presented with a Goedel-type
> question, it got the answer right. In fact we should say briefly that any
> system which was not floored by the Goedel question was eo ipso not a Turing
> machine, i.e., not a machine within the meaning of the act.

If a machine were created thus, then it would be a model of the mind. Objective
achieved. Due to the limitations imposed by Lucas, this would not be a machine
by his definition.

Consider the following argument:
1. The universe and everything in it obeys a set of physical laws.
2. A computer can simulate physical laws.
3. A human brain (and hence the mind) obeys the physical laws in (1).
4. Therefore, a computer can model the human brain, and hence the mind.

If the mind does not obey physical laws, then there may be other laws that it
does obey, which could also be modelled.
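The second premise - that a computer can simulate physical laws - is at least uncontroversial for simple mechanics. A minimal sketch (the constants and step size are my own illustrative choices), integrating free fall under gravity with Euler steps:

```python
g = 9.81        # gravitational acceleration, m/s^2
dt = 0.001      # time step, s
v = y = 0.0     # initial velocity and displacement
for _ in range(1000):   # simulate one second of falling
    v += g * dt         # dv/dt = g
    y += v * dt         # dy/dt = v
# The analytic law gives y = g*t^2/2 = 4.905 m at t = 1 s;
# the simulation approximates it closely.
print(round(y, 3))
```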

I would like to hear any objections to this argument.

Edwards, Dave

This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:26 GMT