From: Cove Stuart (smc198@ecs.soton.ac.uk)
Date: Thu Mar 01 2001 - 20:25:16 GMT
http://cogsci.soton.ac.uk/~harnad/Papers/Py104/dennett.rob.html
Cove:
Dennett's paper begins by outlining the feasibility of constructing a
conscious robot. Dennett then moves on to a description of the Cog
project at MIT, which is attempting to bestow some toy human abilities
upon a robot in an attempt to make it as conscious as possible.
>DENNETT:
>It is unlikely, in my opinion, that anyone will ever make a robot that is conscious in
>just the way we human beings are.
I agree with this statement, but does this really matter? If we are
using Turing's criterion as a guide then surely not. As long as the
function of the robot is indistinguishable from that of its human
counterpart, can we not simply take that as good enough to claim it is
conscious?
Cove:
On the arguments against the possibility of a conscious robot, Dennett's first
is easily discredited.
>DENNETT:
>(1) Robots are purely material things, and consciousness requires immaterial mind-stuff.
>(Old-fashioned dualism)
Cove:
This view appears to me to be pseudo-science. Dennett correctly argues
that if science had settled for supernatural forces instead of
explainable, causal principles, scientific knowledge would not be as
sophisticated as it is.
The second argument is related to the way we define machines.
>DENNETT:
>(2) Robots are inorganic (by definition), and consciousness can exist only in an organic
>brain.
Cove:
Dennett counters with the argument that vitalism is dead, and uses
biochemistry as an example to illustrate that organic processes can be
reduced to mechanical principles and reproduced in other ways.
>DENNETT:
>it is conceivable--if unlikely--that the sheer speed and compactness of biochemically engineered
>processes in the brain are in fact irreproducible in other physical media (Dennett, 1987).
Cove:
This is a valid point, and Dennett goes on to suggest that a
bioengineered approach may also be a good way to create a more human
robot.
>DENNETT:
>if somebody were to invent some sort of cheap artificial neural network fabric that could usefully be
>spliced into various tight corners in a robot's control system, the embarrassing fact that this fabric
>was made of organic molecules would not and should not dissuade serious roboticists from using it--and
>simply taking on the burden of explaining to the uninitiated why this did not constitute "cheating" in
>any important sense.
Cove:
If I ignore some ethical questions raised by the use of organic
materials in an attempt to build a conscious robot, I do agree that
utilising natural mechanisms is a good idea. Dennett doesn't really
justify why it isn't cheating to use such techniques, but I think it
would be if the causal principles of the materials used were not well
understood. If they aren't, our task would be less like robotics and
more like surgery: we would have reverse-engineered nothing. Also, at
what point would the use of organic materials mean we ceased to have a
robot and had an animal instead?
The third argument is also related to the way in which we define machines.
>DENNETT:
>(3) Robots are artefacts, and consciousness abhors an artefact; only something natural, born not
>manufactured, could exhibit genuine consciousness.
Cove:
Dennett uses examples of copies to argue his point.
>DENNETT:
>an atom-for-atom duplicate of a human being, an artefactual counterfeit of you, let us say, might
>not legally be you, and hence might not be entitled to your belongings, or deserve your punishments,
>but the suggestion that such a being would not be a feeling, conscious, alive person as genuine as any
>born of woman is preposterous nonsense
Cove:
I agree with this point, but as Dennett points out, a conscious robot
may be a task too complicated to handle with silicon and motors. It may
also be difficult to give it all of its necessary knowledge. This is
something which all humans acquire through life, and I agree with
Dennett that if we are to avoid the frame problem and solve the
credit/blame assignment problem, a system that adaptively and
continually learns is imperative.
Dennett then uses the comparison of animating a live-action film pixel
by pixel with filming it in the more traditional way, to illuminate the
fine line between the feasibility of achieving complex tasks and the
practicality of doing so. It is pointed out that Disney may once have
aspired to producing cartoons that were indistinguishable from real
life, and although this is still not practical, it is feasible in
principle. Dennett goes on to make a point about this that I'm not sure
I agree with.
>DENNETT:
>Perhaps no cartoon could be a great film, but they are certainly real films--and some are indeed good
>films; if the best the roboticists can hope for is the creation of some crude, cheesy, second-rate,
>artificial consciousness, they still win.
Cove:
My problem is whether consciousness is an all-or-nothing thing. If
Turing's criterion is all we can use to determine the presence of
consciousness, surely a second-rate kind is no kind at all. I'm not
talking about physical or mental disability, but the ability to pass
TT3.
The final argument is the only one Dennett claims is truly defensible.
>DENNETT:
>(4) Robots will always just be much too simple to be conscious.
Cove:
This argument is concerned with the medium from which the robot is
constructed. Dennett disagrees with Searle and others, who claim that
some part of the brain could not be substituted without losing
consciousness, and I do too.
>DENNETT:
>Artificial heart valves work really very well, but they are orders of magnitude simpler than organic
>heart valves, heart valves born of woman or sow, you might say.
Cove:
A good point. But if we build artificial brains, they must be able to pass
TT3 to be truly intelligent.
Dennett suggests that the most interesting route of exploration is
simply to make a theoretically interesting robot that would be able to
perform a series of tasks we typically associate with consciousness.
>DENNETT:
>Maybe we could even learn something interesting about what the truly hard problems are without ever settling
>any of the issues about consciousness.
Cove:
Such a humanoid robot is being constructed at MIT. Cog is an adult-sized
robot with arms, a head and shoulders, but no legs. All of these
can move in three degrees of freedom, essentially as humans do.
The robot also has eyes and ears, which function to a fair level of
performance. The MIT website confirms Dennett's claim that Cog can
perform fairly sophisticated hand-eye coordination (playing with a
slinky, etc.). Dennett points out that the team have compromised the
performance of the vision system, in the hope that the degraded
performance will not impair the robot to such an extent that the
problems it encounters are too dissimilar to those encountered by
humans. I agree that in a conscious being, degrading the performance
of sensory perception does not make it any less conscious. However, as
this robot possesses only a toy subset of what comprises a human, it
seems unlikely that it will settle any issues about consciousness.
Dennett continues by describing how Cog will have face-recognition
capacity, natural language processing abilities, and hard-wired
responses such as pain and eye blinking.
>DENNETT:
>It has limit switches, heat sensors, current sensors, strain gauges and alarm signals in all the right
>places to prevent it from destroying its many motors and joints. It has enormous "funny bones"--motors
>sticking out from its elbows in a risky way. These will be protected from harm not by being shielded in
>heavy armor, but by being equipped with patches of exquisitely sensitive piezo-electric membrane "skin"
>which will trigger alarms when they make contact with anything. The goal is that Cog will quickly "learn"
>to keep its funny bones from being bumped--if Cog cannot learn this in short order, it will have to have
>this high-priority policy hard-wired in.
Cove:
All this sounds really impressive, but will the robot actually feel
pain? When the alarms sound, will the robot be in agony or just appear
to be? If it isn't the case in this toy example (which I think it is
not), would it be if we scaled up the example, integrating it with
others that all appear functionally indistinguishable from our own
corresponding attributes?
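To make the contrast concrete, here is a minimal, purely hypothetical sketch of the
kind of alarm-driven avoidance learning Dennett describes. Every name, threshold and
update rule in it is my own assumption rather than Cog's actual control code; the point
is that all the loop ever does is adjust a number when a "skin" reading crosses a
threshold.

import random

# Hypothetical sketch only: alarm-driven avoidance learning of the sort Dennett
# describes for Cog's "funny bones". Names and numbers are invented for illustration.

CONTACT_THRESHOLD = 0.5   # assumed level above which a "skin" patch counts as a bump
LEARNING_RATE = 0.2

# One avoidance weight per exposed motor; higher means "keep this away from obstacles".
avoidance_weights = {"left_elbow": 0.0, "right_elbow": 0.0}

def read_skin_sensor(joint):
    """Stand-in for a piezo-electric membrane reading (random here)."""
    return random.random()

def control_step():
    for joint, weight in avoidance_weights.items():
        if read_skin_sensor(joint) > CONTACT_THRESHOLD:
            # "Alarm": register the bump and strengthen the avoidance tendency.
            avoidance_weights[joint] = weight + LEARNING_RATE * (1.0 - weight)
        # A real controller would now bias motor commands away from contact
        # in proportion to avoidance_weights[joint].

for _ in range(100):
    control_step()
print(avoidance_weights)

If Cog could not learn this in short order, the hard-wired high-priority policy Dennett
mentions would, in a sketch like this, amount to nothing more than initialising the
weights high.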
Dennett discusses the importance of the nature/nurture aspect of
learning, and how Cog will pass through an extended period of
artificial infancy, learning from its experiences. He also says a
parental role will be established between Cog and any number of people,
in an attempt to develop meaningful relationships. As Dennett points
out, this is a good way of dealing with the complexity of attempting to
give Cog all its knowledge (the frame problem). He does not mention the
distinction between Cog's infancy and adulthood, but I assume it will be
drawn when Cog's toy skills reach a level roughly equivalent to that of
a human adult.
>DENNETT:
>How plausible is the hope that Cog can retrace the steps of millions of years of evolution in a few months
>or years of laboratory exploration?... The acquired design innovations of Cog-I can be immediately transferred
>to Cog-II, a speed-up of evolution of tremendous, if incalculable, magnitude.
>Moreover, if you bear in mind that, unlike the natural case, there will be a team of overseers ready to make
>patches whenever obvious shortcomings reveal themselves, and to jog the systems out of ruts whenever they
>enter them, it is not so outrageous a hope, in our opinion.
Cove:
Are they really emulating evolution? The team are able to decide which
behaviours 'learnt' by Cog are the fittest to proceed to the next
generation, but will this kind of direct change encourage conscious
behaviour? Evolution has worked in strange ways to mould our
intelligent behaviour, so will our ideas about fitness take us on a
path away from the one by which it formed our conscious mind?
Dennett then discusses more about the natural language capabilities
they hope to give Cog, before moving on to Cog's more general behaviour.
>DENNETT:
>...this will not work unless the team manages somehow to give Cog a motivational structure that can be at least
>dimly recognized, responded to, and exploited by naive observers. In short, Cog should be as human as possible in
>its wants and fears, likes and dislikes.
Cove:
A conscious robot should definitely be as human as possible, but if the
team define the motivation behind Cog's behaviour, surely the robot will
just appear motivated to the naive observer. Like Cog's response to
pain, it won't actually feel anything.
The next area of discussion is Cog's technical construction. It utilises
a parallel architecture consisting of 64 independent nodes, each
running Lisp, a language commonly used for genetic algorithms and
parallel processing systems. Each node is essentially a Mac, with
another used to monitor and debug Cog's behaviour. Dennett explains how
this architecture allows them to create a number of virtual machines,
capable of executing a number of different tasks. At start-up the
nodes load their 'short-term memory' from files stored on a file
server. This server is supposed to be analogous to long-term memory,
but I'm not sure the analogy holds.
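A minimal sketch of that boot arrangement, under my own assumptions (the file names,
state layout and node count below are invented for illustration and are not the actual
Cog software), might look like this:

import json
import threading
from pathlib import Path

STATE_DIR = Path("cog_state")   # stand-in for the shared file server
STATE_DIR.mkdir(exist_ok=True)
NUM_NODES = 64

def boot_node(node_id):
    """One 'node': load its last saved state, run a control loop, save the state back."""
    state_file = STATE_DIR / f"node_{node_id}.json"
    state = json.loads(state_file.read_text()) if state_file.exists() else {"steps": 0}
    for _ in range(1000):   # placeholder control loop: sense / act / update state
        state["steps"] += 1
    state_file.write_text(json.dumps(state))

threads = [threading.Thread(target=boot_node, args=(i,)) for i in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

What a sketch like this makes obvious is that the persistence lives entirely outside
the nodes: the 'memory' is whatever was left on the file server, which is exactly the
point picked up below about Cog being shut down and booted up again.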
>DENNETT:
>There is a big wager being made: the parallelism made possible by this arrangement will be sufficient to provide
>real-time control of importantly humanoid activities occurring on a human time scale. If this proves to be too
>optimistic by as little as an order of magnitude, the whole project will be forlorn, for the motivating insight
>for the project is that by confronting and solving actual, real time problems of self-protection, hand-eye
>coordination, and interaction with other animate beings, Cog's artificers will discover the sufficient conditions
>for higher cognitive functions in general--and maybe even for a variety of consciousness that would satisfy the
>skeptics.
Cove:
I agree that a conscious robot would have to perform on a human time
scale. However, even if Cog is able to perform Its subset of human
skills in a comparable time, I still don't think it's conscious. As I
mentioned in my last point, the robot can be shut down, and booted up,
with its current mode of execution being Loaded from files stored on a
server. Humans never go off-line, and the information about how to
perform our skills Is implicit and persistent, Cog will not know how to
perform certain skills unless they are loaded into its brain by someone
other than Cog. The variety of consciousness Dennett hopes will satisfy
sceptics seems unlikely to appear in Cog because of its lack of self
motivation.
Dennett then illustrates the importance of embodiment to consciousness.
>DENNETT:
>So it is something of a surprise to find this AI group conceding, in effect, that there is indeed something to the
>sceptics' claim (e.g., Dreyfus and Dreyfus, 1986) that genuine embodiment in a real world is crucial to consciousness.
>Not, I hasten to add, because genuine embodiment provides some special vital juice that mere virtual-world simulations
>cannot secrete, but for the more practical reason--or hunch--that unless you saddle yourself with all the problems of
>making a concrete agent take care of itself in the real world, you will tend to overlook, underestimate, or misconstrue
>the deepest problems of design.
Cove:
I completely agree with this: this is TT3. And in the last sentence
Dennett illustrates precisely why Cog probably won't exhibit
intelligence in a way that will satisfy sceptics.
Finally, Dennett discusses three philosophical themes.
The first is the symbol-grounding problem.
>DENNETT:
>Anything in Cog that might be a candidate for symbolhood will automatically be "grounded" in Cog's real predicament, as
>surely as its counterpart in any child, so the issue doesn't arise
Cove:
This isn't strictly true. The predicament that Cog is in requires it to
'behave' a certain way. The predicaments are very real, and so are the
responses from Cog. But the understanding of what it's doing doesn't
have to be real.
The next theme is similar, and relates to the notion that nothing
really matters to an artificial intelligence. Dennett argues the
simplification point again. He feels that by giving Cog built-in
preferences, it will have a kind of one-dimensional appreciation of
pleasure and pain. I still think it's all or nothing, and Cog's welfare
and preferences should be the complete responsibility of the robot (TT3
again).
The final theme is one presented by Lucas, and concerns the
self-awareness of conscious robots. Even if we are able to monitor
accurately what is happening inside the machine, if the robot is able
to communicate with us in a self-referential way, will it be a better
source of knowledge about what it's doing and why? I agree with
Dennett that it would. I think that this is one of the main goals of
AI, and although the robot may tell us lies, or give us incorrect
information for some other reason, we find this in human behaviour, so
it is reasonable to assume that if we build a TT3-passing robot, it
will be unpredictable too.