Re: Dennett: Making Conscious Robots

From: Henderson Ian
Date: Thu May 24 2001 - 13:14:37 BST

In reply to: McIntosh Chris: "Re: Dennett: Making Conscious Robots"

> McIntosh:
> Unfortunately Dennett doesn't give a clear definition of "robot".
> The dictionary says that a robot is a mechanical device ie. one
> whose behaviour can be fully explained by the laws of physics.
> But then what about the suggestion that anything that obeys these
> laws can be computationally simulated. How could a simulation
> have pain or any other conscious experience, and hence how if the
> suggestion is right could we have a conscious robot (or any other
> consciousness)?

I am also unsure about the definition of 'robot'. Most definitions I can
think of are not at all helpful from a scientific point of view, such as
'something created artificially' (origin-chauvinism), or 'something lacking
a spark of consciousness' (dualism). The latter seems unlikely, as both
humans and robots obey the laws of physics, and both are formed of the same
kinds of subatomic particles. Consider a 'robot' so like us in both
behaviour and appearance that we cannot tell it apart from a human even at
the T4 or T5 level: what scientific meaning does the term 'robot' have
then? Maybe the term can only have meaning in the context of its external
historical origin, like Dennett's Cherokee Indian, since internally it is
just the same as any human being: the only difference is that the 'robot'
was engineered -- made by a non-reproductive method. Consequently, the term
'robot' has little scientific meaning in my opinion; it is just a
convenient label for a machine made in a certain way. At the end of the
day, it seems that we are 'just' physical mechanisms too (albeit
extraordinarily complex ones).

>> (1) Robots are purely material things, and consciousness requires
>> immaterial mind-stuff. (Old-fashioned dualism)
>> over the centuries, every other phenomenon of initially "supernatural"
>> mysteriousness has succumbed to an uncontroversial explanation within the
>> commodious folds of physical science... magnetism is one of the best
>> understood of physical phenomena, strange though its manifestations are.
>> The "miracles" of life itself, and of reproduction, are now analyzed into
>> the well-known intricacies of molecular biology. Why should consciousness
>> be any exception? .. Why should the brain be the only complex physical
>> object in the universe to have an interface with another realm of being?

> McIntosh:
> Consciousness has always been a slightly different
> puzzle to the likes of gravity and magnetism, as I doubt these
> were ever attributed to another realm.

Even in his paper, Dennett gives such an example: Thales believed that the
lodestone had a soul because of its ability to move iron. The Greek
philosophers came up with many other ideas about how the world works that
seem zany by today's standards (Thales also believed that everything was
made of water, for example), and these ideas were not even necessarily
inspired by religious belief. Almost everything whose workings scientific
investigation has exposed *thus far* has probably at one time or another
been attributed to superstitious or religious belief. As Dennett reasonably
argues, there are no grounds for supposing that the mind and consciousness
are not likewise amenable to scientific investigation. The problem for
cognitive science is one of method: *how* to investigate the mind, rather
than whether to investigate it.

>> but it is conceivable--if unlikely-- that the sheer speed and compactness
>> of biochemically engineered processes in the brain are in fact
>> unreproducible in other physical media

> McIntosh:
> If these biochemically engineered processes are
> computational then they are not only reproducible, but
> reproducible at far greater speeds, by computers. But the brain
> does more than symbol manipulation and its additional powers may
> be unreproducible in some other physical media, irrespective of
> speed and compactness.

To my knowledge, we don't *know* for sure that the brain does more than
symbol manipulation: the conjecture of the computationalists is that it
does not.
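For concreteness, here is a minimal sketch of what 'symbol manipulation'
amounts to (my own illustration, not anything from Dennett or McIntosh): a
tiny rule-based rewriting machine whose behaviour depends only on the
shapes of its symbols, so the very same procedure could in principle be
realised in any physical medium.

```python
# A minimal symbol-manipulation system: a Turing-machine-style rewriter
# that computes the successor function on unary numerals. The rules
# mention only symbol shapes ("1", "_"), never the physical substrate.

RULES = {
    ("q0", "1"): ("q0", "1", +1),   # scan rightwards over the 1s
    ("q0", "_"): ("halt", "1", 0),  # append a 1 at the end, then halt
}

def run(tape, state="q0", pos=0):
    """Apply the rewrite rules until the machine halts; return the tape."""
    tape = list(tape)
    while state != "halt":
        if pos >= len(tape):
            tape.append("_")        # blank cells beyond the written tape
        symbol = tape[pos]
        state, write, move = RULES[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape)

print(run("111"))  # successor of unary 3: "1111"
```

The point of the computationalist conjecture is that if this kind of
shape-governed rule-following is all the brain does, then the medium that
implements the rules is irrelevant.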

>> There might, however, be a question of practicality. We have just seen
>> how, as a matter of exigent practicality, it could turn out after all
>> that organic materials were needed to make a conscious robot. For similar
>> reasons, it could turn out that any conscious robot had to be, if not
>> born, at least the beneficiary of a longish period of infancy. Making a
>> fully-equipped conscious adult robot might just be too much work. It
>> might be vastly easier to make an initially unconscious or
>> nonconscious "infant" robot and let it "grow up" into consciousness,
>> more or less the way we all do... a certain sort of process is the only
>> practical way of designing all the things that need designing in a
>> conscious being.

> McIntosh:
> It's hard to see how growth could make matters
> significantly easier for the designer. Dennett must still make
> plans for his adult Cog but will also need to overcome the
> extremely complicated growth process. Understanding how to
> introduce the processes in the brain that cause consciousness
> must be an important starting point.

Growth -- development through learning -- seems to me a reasonable way to
investigate intelligence using Cog. The alternative, hard-coding in
functionality which we can only speculate is necessary to carry out certain
intelligent tasks, may lead to Cog producing results of little use at all
with respect to the understanding of human intelligence. As Dennett says,
cognitive scientists have little idea how the brain works, or which things
are important and which are not; starting Cog off with as few built-in
parameters as possible (based on things we both understand the functioning
of and know to be innate in human children) will minimise the chance of Cog
going down the wrong path and producing no insight into human intelligence
at all. However, there is one important caveat to Dennett's approach: the
more Cog is left to its own devices, the harder it will be to work out how
Cog is doing what it is doing. And that, after all, is the main aim of the
experiment: using Cog to gain insight into *how* the mind works.
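To make the contrast between hard-coding and growth concrete, here is a toy
sketch (entirely my own, and nothing like Cog's actual architecture) in
which a behaviour is not programmed in directly but acquired from
experience, starting only from a minimal 'innate' learning rule.

```python
# The behaviour (here, the logical AND function) is never written into the
# program; only the perceptron learning rule is innate. The weights -- the
# part that "grows" -- are shaped by experience of examples.

def train_and(epochs=20, lr=0.1):
    """Acquire the AND function from examples using a single perceptron."""
    w = [0.0, 0.0]   # weights: start with no built-in knowledge
    b = 0.0          # bias
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1   # innate learning rule: nudge weights
            w[1] += lr * err * x2   # towards reducing the error
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

and_fn = train_and()
print([and_fn(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```

Note the caveat from above in miniature: even in this trivial case, the
learned weights do not wear their explanation on their sleeve -- one has to
inspect them after the fact to work out how the behaviour is produced.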

This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:31 BST