Re: Dennett: Making a Conscious Robot

From: HARNAD, Stevan
Date: Thu May 04 2000 - 18:27:05 BST

On Thu, 4 May 2000, Edwards, Dave wrote:

> robots could have a different kind of consciousness (perhaps a hive mind
> for a network?).

It's not about what kind of consciousness a robot has, it's about
whether it has any kind at all. (And what are robots? cf. What are
machines? If a robot is just a causal system, then we are robots too.
"Man-made" is an arbitrary detail, compared to other matters, such as T3
capacity.)

> It should only be a matter of time until we can
> explain this, so far, unexplainable phenomenon of consciousness.

Maybe, though, all we will be able to explain is what it takes to have
the capacities (T3), not what it takes to feel.

> Is a robot with "muscles" instead of motors a robot within the meaning of the
> act? If muscles are allowed, what about lining the robot's artificial retinas
> with genuine organic rods and cones instead of relying on relatively clumsy
> color-tv technology?

> I believe that a line must be drawn between human consciousness and any other
> kind. Is a manufactured robot that is identical to a human, a human? Or a
> robot? How can you tell, if it's identical? If we do draw a line somewhere,
> do we get a new discrimination, consciousness-ism? If no line is drawn,
> all types of consciousness must be equal, is this true?

Well, we've already de facto declared that nonhuman animal consciousness
is worth less than human ('cause we eat 'em). But apart from that, what's
at stake in drawing lines among kinds of consciousness? Besides, if a
robot passes T3, how would you know it had a different "kind" of
consciousness any better than you could know whether it has any kind of
consciousness at all?

> If the robot is too simple, make it more complex. No one said it has to be a
> certain size. Surely, with no size limit and molecular robotics,
> a sufficiently complex machine can be built.

A body or brain the size of Southampton could be a handicap for
T3-testing -- and T3-passing...

> It would be breakthrough enough to get Cog to have a natural language
> conversation with a human, let alone all the other things they hope to do.

That would just be T2, though. (Searle and the symbol-grounding problem
again.)
> How much computation is necessary for these simple operations? Can you
> find out the sufficient conditions for higher cognitive functions if
> Cog cannot perform them, because he is too simple?

In other words, what can one conclude from toy capacity (t3 < T3)?

> I agree that to create an artificial consciousness by interacting and
> learning with/from its environment, it is easier and probably more
> reliable to create a robot than a computer simulation. This is due to
> the necessary simplification of a computer simulation, which may miss
> some vital component. The computing power necessary to accurately model
> a robot and its environment is far beyond our current processing power.
> Building a robot, however, is not.

Is it just a practical matter, or was there also something about T3 vs.
T2 and just squiggling?

> Could Cog get to a point where it is not acceptable to turn him off? Where he
> might protest, and claim the rights of any conscious being?

Easier to get to the (toy) point where he protests than to get to T3
(where that's the least of the ways in which it is impossible to tell
him apart from the rest of us).

> But does Cog understand what Chicago is? Or is its explanation or
> meaning a group of squiggles to be called upon when needed? Does it
> really matter, if Cog can perform and interact as well as expected?

Depends on whether your project was to build a useful device or to
reverse-engineer the mind.

> Does a child know what Chicago is? Or does it just remember what it has been
> told?

Close enough; there's someone home in the child, feeling...

> > The reasons for saying that something does matter to Cog are not
> > arbitrary; they are exactly parallel to the reasons we give for saying
> > that things matter to us and to other creatures.
> Things matter to us, principally, because it is beneficial to our
> welfare, such as eating, sleeping, friends, money, etc. There are
> things that matter to Cog as well, because Cog has been told to protect
> itself, such as not damaging itself.

Isn't the difference between real "mattering" and merely as-if
"mattering" that it FEELS-LIKE something for things to really matter? So
it's all down to whether or not Cog feels; it's not to do with his
bodily welfare. (A TV does not care about its welfare.)

> The obvious problem may arise from just believing Cog's own words. He
> may learn to lie, probably from us, just like a child. Unfortunately,
> we will have little choice but to believe him, as he gets more and more
> complex.

The real problem is T2 (symbols only; squiggles). And the solution is
not just any old complexity, but T3-power.


This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:36:28 GMT