Re: Lucas, J. (1961) Minds, Machines and Goedel

From: Boardman, Adam (afjb196@ecs.soton.ac.uk)
Date: Thu Feb 24 2000 - 14:14:25 GMT


http://cogprints.soton.ac.uk/abs/phil/199807022

> Grady:
> In this article LUCAS proposes that Goedel's theorem disproves
> Mechanism (the view that minds can be explained as machines).
>
> >LUCAS:
> >"This formula is unprovable-in-the-system" would be false:
> >equally, if it were provable-in-the-system, then it would
> >not be false, but would be true, since in any consistent
> >system nothing false can be proved-in-the-system, but only
> >truths.
>
> Grady:
> LUCAS explains how Goedel claims that in any consistent
> system there are always going to be unprovable statements
> which we can see with our human minds to be true.

For me the consistency part is the most problematic. What is a
consistent system? (In Goedel's sense it is one that can never prove
both a statement and its negation.) Loosely, it is one that gives the
same answer each time it is asked the same question. Certainly my mind
doesn't do this: it tries to convey the same concept but picks
seemingly random different words each time. So why should we be
concerned with Goedel's theorem? His theorem wouldn't apply to an
attempt to explain the mind as a machine.

To attempt to explain the mind as a machine would entail simulating
parts of the brain, which LUCAS claims can be done. The key will be in
finding the method of neural learning, coupled with the random, noisy
inputs that come from interaction with a natural environment. In a way,
to simulate the brain is to lose the usefulness of computers:
currently they are good at things that humans aren't, and vice versa.
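
A minimal sketch of what I mean by neural learning driven by noisy
inputs (just an illustration: a toy perceptron with Gaussian noise
added to its inputs; the learning rule, the noise level and the AND
task are arbitrary choices of mine, not anything LUCAS describes):

    import random

    def noisy(x, sigma=0.2):
        # simulate noisy input from interaction with the environment
        return [xi + random.gauss(0, sigma) for xi in x]

    def train_perceptron(examples, epochs=50, lr=0.1):
        # examples: list of (inputs, target) pairs, target in {0, 1}
        w = [0.0] * len(examples[0][0])
        b = 0.0
        for _ in range(epochs):
            for x, target in examples:
                x = noisy(x)  # the same question never arrives in quite the same form
                out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = target - out
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    # learn a noisy AND function
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    print(train_perceptron(data))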

> Grady:
> LUCAS here pins his whole argument on the prophecy that
> man will never be able to 'Goedel'. What if this assumption
> proves to be false? Given math's incompleteness it must
> have been conceivable to him that one day the Goedel algorithm
> would be born.

Seeing that we humans aren't particularly consistent systems, and that
without basic training we are quite unable to do simple arithmetic or
to comprehend formulae, it seems to me highly unlikely that we could
appropriately be 'Goedel'd', by which I take Grady to mean that there
is a formula that is unprovable in our system.

> Grady:
> However it seems
> to me that one machine could resolve such a statement on
> another. Would it be possible for 2 machines in parallel to
> Goedel? And could this be a simplified explanation of the
> mind's Goedel algorithm?

Perhaps you're right here, though not with two machines in parallel but
with one larger machine considering the algorithm being run on the
smaller one, like a human setting up a logic system to evaluate the
formula.
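
A toy sketch of that larger-machine idea (the inner 'system' here is
just a hard-coded list of provable sentences, and the self-referential
sentence is written by hand, so this only illustrates the shape of the
argument, not a real Goedel construction):

    # the inner machine: a deliberately tiny 'formal system' that can
    # only prove the sentences in this fixed list
    INNER_THEOREMS = {"1+1=2", "2+2=4"}

    def inner_proves(sentence):
        return sentence in INNER_THEOREMS

    # the outer machine builds the sentence that talks about the inner one
    g = "This sentence is not provable by the inner machine"

    def outer_evaluates(sentence):
        # the outer machine can check what the inner one proves, so it
        # can see that g is true precisely because the inner machine
        # cannot prove it
        return not inner_proves(sentence)

    print("inner proves g:", inner_proves(g))            # False
    print("outer sees g as true:", outer_evaluates(g))   # True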

Minds don't tend to consider themselves systems. When asked to
interpret the sentence "This formula is unprovable-in-the-system", a
human will set up a logic system inside the brain in which to evaluate
the logic; recursive inconsistencies would be noticed, and since the
formula states that it is unprovable, and it is indeed unprovable, it
is considered true. That is not an easy concept for even the human mind
to comprehend; as LUCAS himself states: "The foregoing argument is very
fiddling, and difficult to grasp fully".

> >LUCAS:
> >Machines are definite: anything which was indefinite or
> >infinite we [258] should not count as a machine.

So if a machine were able to simulate a mind, it would cease to be a
machine. Say a machine can move about in effectively infinite space; it
could then store information by changing patterns in physical material,
of which there is an 'infinite' supply. Since the machine has expanded
its storage capacity to infinity, does it stop being a machine?

> Grady:
> Essentially, isn't it arrogant or naive to assume
> that simulation of life is actually creation?

If the simulation is complete and runs in real time (at the speed of
the original), then there is no reason to suppose that it is not alive,
though that would require passing the Turing test.

> Grady:
> The mind it seems will always have the last word as the
> machine is always limited by what is definite. Any definite
> machine is vulnerable to being out-Goedeled.

Yes, but why should a machine designed for mind simulation have to be
definite? It's back to the consistency argument again.

> Grady:
> (with respect to differences between mind and machine)
> Notable other differences might be.. miscalculation, guessing
> (a uniquely human version of randomly choosing) and
> imperfection

Machines aren't infallible: if fed garbage they will only return
garbage. Though perhaps that is considered the human's fault, for
feeding in incorrect or misleading information.

> >LUCAS:
> >If the mechanist produces a machine which is so complicated
> >that this ceases to hold good of it, then it is no longer a
> >machine for the purposes of our discussion, no matter how it
> >was constructed. We should say, rather, that he had created
> >a mind, in the same sort of sense as we procreate people at
> >present.
>
> Grady:
> Lucas does seem to jump the gun here. OK we have some kind of
> super-machine but LUCAS said earlier that it could be an
> adequate simulation of a mind only if it could do everything a
> mind can do. LUCAS has no real idea of what this super-machine
> could or couldn't do so it seems a little premature to suggest
> it could be some kind of procreated mind.

Presumably he's thinking of genetic algorithms or some such means by
which a machine constructs random copies of itself with minor changes;
each of these random variants is evaluated by an external entity (the
environment, natural selection of the fittest, etc.) and those that
survive are then allowed to make further mutated copies. If LUCAS then
considers this not to be a machine, what is it?
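
A minimal sketch of the kind of genetic algorithm I have in mind (the
'external entity' is just a fitness function that rewards bit-strings
with many ones; the selection scheme, mutation rate and population size
are arbitrary illustrative choices):

    import random

    def fitness(genome):
        # the 'external entity': here it simply rewards genomes with more ones
        return sum(genome)

    def mutate(genome, rate=0.05):
        # each copy carries small random changes
        return [1 - g if random.random() < rate else g for g in genome]

    def evolve(pop_size=20, genome_len=16, generations=50):
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # the fitter half 'survives' and is allowed to make mutated copies
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop_size // 2]
            population = survivors + [mutate(s) for s in survivors]
        return max(population, key=fitness)

    print(evolve())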

> Grady:
> The idea of critical complexity may well hold water however
> it too seems a little abstract. During the Enlightenment
> man sought to understand the gaps he had once filled with God,
> is science here guilty of the same sin?

Probably, but is it a sin? What's wrong with admitting that we don't
yet understand something, and so describing it in the most logical way
we can comprehend at the time? I suggest that it only becomes a sin
when we use it to damage other people, or to have wars over it.

Boardman, Adam <afjb196@ecs.soton.ac.uk>


