Daniel Dennett: The Practical Requirements for Making a Conscious Robot
In this paper the author, Daniel Dennett, approaches the topic of conscious
robots from three different points of view.
First he raises the question of whether conscious robots are possible 'in principle';
then he describes a contemporary project to build a humanoid robot. In the
last section he discusses three philosophical problems concerning artificial
intelligence and robot consciousness.
Before going into detail, Dennett gives us a short introduction to how he
became involved with robots and what the basic aims of the humanoid robot
project are.
> DENNETT:
> A team at MIT of which I am a part is now embarking on a
> longterm project to design and build a humanoid robot, Cog, whose cognitive
> talents will include speech, eye-coordinated manipulation of objects, and a
> host of self-protective, self-regulatory and self-exploring activities. The
> aim of the project is not to make a conscious robot, but to make a robot
> that can interact with human beings in a robust and versatile manner in real
> time, take care of itself, and tell its designers things about itself that
> would otherwise be extremely difficult if not impossible to determine by
> examination. Many of the details of Cog's "neural" organization will
> parallel what is known (or presumed known) about their counterparts in the
> human brain, but the intended realism of Cog as a model is relatively
> coarse-grained, varying opportunistically as a function of what we think we
> know, what we think we can build, and what we think doesn't matter. Much of
> what we think will of course prove to be mistaken; that is one advantage of
> real experiments over thought experiments.
Dennett points out that the goal is set rather modestly, because the project is
not about creating a machine that could pass a T2 - or even higher - Turing
test. The aim is still complicated enough, though, and Dennett hopes that
plenty can be learned from the project and that it might at least point in
the direction of a conscious robot.
> 1. ARE CONSCIOUS ROBOTS POSSIBLE "IN PRINCIPLE"?
> DENNETT:
> It is unlikely, in my opinion, that anyone will ever make a robot that is
> conscious in just the way we human beings are.
Dennett distinguishes two kinds of reasons why building a conscious robot might
be impossible:
> DENNETT:
> They might be
> deep--conscious robots are in some way "impossible in principle"--or they
> might be trivial--for instance, conscious robots might simply cost too much
> to make.
Concerning the first, deeper kind of reason, the problem might lie in the very
idea of assembling consciousness out of parts:
> DENNETT:
> Might a conscious
> robot be "just" a stupendous assembly of more elementary artifacts--silicon
> chips, wires, tiny motors and cameras--or would any such assembly, of
> whatever size and sophistication, have to leave out some special ingredient
> that is requisite for consciousness?
This leads us to the first statement in a series of reasons why it might be
impossible to create artificial consciousness:
> DENNETT:
> (1) Robots are purely material things, and consciousness requires immaterial
> mind-stuff. (Old-fashioned dualism)
Dennett considers this a very old-fashioned opinion and sees no reason why it
should be true, especially when the statement is viewed from a historical
point of view:
> DENNETT:
> over the centuries, every other phenomenon of initially
> "supernatural" mysteriousness has succumbed to an uncontroversial
> explanation within the commodious folds of physical science. [...]
> Why should the brain be
> the only complex physical object in the universe to have an interface with
> another realm of being? [...]
> DENNETT:
> The phenomena of consciousness are an admittedly
> dazzling lot, but I suspect that dualism would never be seriously considered
> if there weren't such a strong undercurrent of desire to protect the mind
> from science, by supposing it composed of a stuff that is in principle
> uninvestigatable by the methods of the physical sciences.
This is quite an interesting point. Especially in earlier times, people tried
to explain things they could not understand by ascribing mystical or religious
qualities to them.
> DENNETT:
> But if you are willing to concede the hopelessness of dualism, and accept
> some version of materialism, you might still hold:
>
> (2) Robots are inorganic (by definition), and consciousness can exist only
> in an organic brain.
Dennett counters this second statement by referring to developments in
biochemistry, which explain the workings of organic compounds in mechanistic
terms:
> DENNETT:
> as biochemistry has shown in matchless
> detail, the powers of organic compounds are themselves all mechanistically
> reducible and hence mechanistically reproducible at one scale or another in
> alternative physical media; but it is conceivable [...] that the
> sheer speed and compactness of biochemically engineered processes in the
> brain are in fact unreproducible in other physical media (Dennett, 1987). So
> there might be straightforward reasons of engineering that showed that any
> robot that could not make use of organic tissues of one sort or another
> within its fabric would be too ungainly to execute some task critical for
> consciousness.
Although a robot is by definition made only of inorganic materials, Dennett
suggests that newly invented organic components should be used anyway if they proved useful:
> DENNETT:
> The standard understanding that a robot shall be made of
> metal, silicon chips, glass, plastic, rubber and such, is an expression of
> the willingness of theorists to bet on a simplification of the issues: their
> conviction is that the crucial functions of intelligence can be achieved by
> one high-level simulation or another, so that it would be no undue hardship
> to restrict themselves to these materials [...]
> But if somebody were to invent some sort
> of cheap artificial neural network fabric that could usefully be spliced
> into various tight corners in a robot's control system, the embarrassing
> fact that this fabric was made of organic molecules would not and should not
> dissuade serious roboticists from using it [...].
Dennett then turns to the third reason one might give for why artificial
consciousness is impossible:
> DENNETT:
> (3) Robots are artifacts, and consciousness abhors an artifact; only
> something natural, born not manufactured, could exhibit genuine
> consciousness.
Dennett argues that this point of view is wrong, because the origin of something
or somebody does not affect its quality. He uses imitation wine and forged paintings, as
well as different ethnic groups, as examples:
> DENNETT:
> Let us dub
> origin chauvinism the category of view that holds out for some mystic
> difference (a difference of value, typically) due simply to such a fact
> about origin. Perfect imitation Chateau Plonque is exactly as good a wine as
> the real thing, counterfeit though it is, and the same holds for the fake
> Cezanne, if it is really indistinguishable by experts. And of course no
> person is intrinsically better or worse in any regard just for having or not
> having Cherokee (or Jewish, or African) "blood."
> [...]
> And to take a threadbare philosophical example, an atom-for-atom duplicate
> of a human being, an artifactual counterfeit of you, let us say, might not
> legally be you, and hence might not be entitled to your belongings, or
> deserve your punishments, but the suggestion that such a being would not be
> a feeling, conscious, alive person as genuine as any born of woman is
> preposterous nonsense.
Here Dennett already assumes that there is nothing non-material inside or
outside the body that affects the consciousness of a being. In fact, if such an
atom-for-atom duplicate behaved exactly like the original person, that would be
evidence that consciousness can be created artificially.
Dennett suggests that, at least for practical reasons, one should not try to
create a 'fully' conscious robot with a level of intelligence comparable to an
average human, but rather some sort of 'infant' that is allowed to grow up
into consciousness:
> DENNETT:
> If consciousness abhors an artifact, it cannot be because being born gives a
> complex of cells a property (aside from that historic property itself) that
> it could not otherwise have "in principle". [...] it could turn out that any
> conscious robot had to be, if not born, at least the beneficiary of a
> longish period of infancy. Making a fully-equipped conscious adult robot
> might just be too much work. It might be vastly easier to make an initially
> unconscious or nonconscious "infant" robot and let it "grow up" into
> consciousness, more or less the way we all do.
> [...] a certain sort of process is the only practical way of designing
> all the things that need designing in a conscious being.
To explain his point of view Dennett compares the creation of a conscious robot
with the making of a good movie. He claims that, although it would theoretically
be possible to create such a film entirely with computers, in practice it would
be far too complicated:
> DENNETT:
> [...] the claim one might make
> about the creation of Steven Spielberg's film, Schindler's List: it could
> not have been created entirely by computer animation, without the filming of
> real live actors. This impossibility claim must be false "in principle,"
> since every frame of that film is nothing more than a matrix of gray-scale
> pixels of the sort that computer animation can manifestly create, at any
> level of detail or "realism" you are willing to pay for. There is nothing
> mystical, however, about the claim that it would be practically impossible
> to render the nuances of that film by such a bizarre exercise of technology.
> How much easier it is, practically, to put actors in the relevant
> circumstances, in a concrete simulation of the scenes one wishes to portray,
> and let them, via ensemble activity and re-activity, provide the information
> to the cameras that will then fill in all the pixels in each frame.
With the help of this example we can imagine how complicated it is to create
artificial intelligence, because even the very costly making of a
computer-generated movie is still...
> DENNETT:
> [...] many orders of magnitude less complex than a conscious being.
The last reason Dennett raises for why creating consciousness might be
impossible is a very simple one:
> DENNETT:
> (4) Robots will always just be much too simple to be conscious.
>
> After all, a normal human being is composed of trillions of parts (if we
> descend to the level of the macromolecules), and many of these rival in
> complexity and design cunning the fanciest artifacts that have ever been
> created. We consist of billions of cells, and a single human cell contains
> within itself complex "machinery" that is still well beyond the artifactual
> powers of engineers. We are composed of thousands of different kinds of
> cells, including thousands of different species of symbiont visitors, some
> of whom might be as important to our consciousness as others are to our
> ability to digest our food! If all that complexity were needed for
> consciousness to exist, then the task of making a single conscious robot
> would dwarf the entire scientific and engineering resources of the planet
> for millennia. And who would pay for it?
This sounds quite reasonable, and of course it is why scientists try to
build a robot from much simpler materials and devices such as motors and
TV cameras.
> DENNETT:
> If this is the only reason there won't be
> conscious robots, then consciousness isn't that special, after all.
If this were really the case, there would be nothing 'mystical' left about
humans, and we might be in danger of valuing human beings even less than we
sometimes already do.
Another problem is to decide at what stage of development an artificial
replacement for some part of the human body counts as an 'equivalent' or
'acceptable' replacement:
> DENNETT:
> Nobody ever said a prosthetic eye had to see as
> keenly, or focus as fast, or be as sensitive to color gradations as a normal
> human (or other animal) eye in order to "count" as an eye. If an eye, why
> not an optic nerve (or acceptable substitute thereof), and so forth, all the
> way in?
Even parts of the brain could be replaced by prostheses in the future, if we
allow some loss of functionality:
> DENNETT:
> There is no
> reason at all to believe that some one part of the brain is utterly
> irreplaceable by prosthesis, provided we allow that some crudity, some loss
> of function, is to be expected in most substitutions of the simple for the
> complex.
Interestingly, Dennett does not dwell on the theoretical question of how to
create '100%' consciousness artificially. Instead, he is of the opinion that we
can already learn a lot by building a robot that is able to communicate and
interact with human beings. This leads us to the second section
of the paper:
> 2. THE COG PROJECT: A HUMANOID ROBOT
> DENNETT:
> A much more interesting tack to explore, in my opinion, is simply to set out
> to make a robot that is theoretically interesting independent of the
> philosophical conundrum about whether it is conscious. Such a robot would
> have to perform a lot of the feats that we have typically associated with
> consciousness in the past [...]. Maybe we could even learn something interesting
> about what the truly hard problems are without ever settling any of the issues
> about consciousness.
>[...]
> Such a project is now underway at MIT. Under the direction of Professors
> Rodney Brooks and Lynn Andrea Stein of the AI Lab, a group of bright,
> hard-working young graduate students are laboring as I speak to create Cog,
> the most humanoid robot yet attempted, and I am happy to be a part of the
> Cog team.
Dennett now describes the robot's outward appearance:
> DENNETT:
> Cog is just about life-size--that is, about the size of a human
> adult. Cog has no legs, but lives bolted at the hips, you might say, to its
> stand. It has two human-length arms, however, with somewhat simple hands on
> the wrists. It can bend at the waist and swing its torso, and its head moves
> with three degrees of freedom just about the way yours does. It has two
> eyes, each equipped with both a foveal high-resolution vision area and a
> low-resolution wide-angle parafoveal vision area,
Of course the robot's eyes are different from human eyes:
> DENNETT:
> Cog's eyes won't give it visual information exactly like
> that provided to human vision by human eyes (in fact, of course, it will be
> vastly degraded), but the wager is that this will be plenty to give Cog the
> opportunity to perform impressive feats of hand-eye coordination,
> identification, and search.
I think that for a project of this kind such cameras are entirely appropriate,
especially because human vision does not have only advantages.
For example, our eyes tire quickly if we stare at a static scene for a while,
which produces odd after-image colours when we then focus on another scene.
Or we perceive different shades of grey between two adjacent areas even when there
are in fact only two grey levels.
Although such shortcomings are perfectly normal for humans, we would not
necessarily want them in a robot.
Nevertheless, the engineers take human beings as a model and try to copy specific
functions of the human body, for example for safety reasons:
> DENNETT:
> Since its eyes are video cameras mounted on delicate, fast-moving gimbals,
> it might be disastrous if Cog were inadvertently to punch itself in the eye,
> so part of the hard-wiring that must be provided in advance is an "innate"
> if rudimentary "pain" or "alarm" system to serve roughly the same protective
> functions as the reflex eye-blink and pain-avoidance systems hard-wired into
> human infants.
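To make the idea of such a hard-wired protective reflex a little more concrete,
here is a minimal sketch in Python. All the names and thresholds are my own
invented illustration and have nothing to do with Cog's actual wiring; the
point is only that the reflex runs before, and can override, anything learned:

    # Hypothetical sketch of an "innate" protective reflex; not Cog's real design.
    EYE_CLEARANCE_LIMIT = 0.05   # metres a hand may come to an eye before alarm
    JOINT_TORQUE_LIMIT = 40.0    # made-up threshold for harmful joint load

    def protective_reflex(hand_to_eye_distance, joint_torque, retract_arm, halt_motion):
        """Checked before any learned behaviour; overrides it when danger is detected."""
        if hand_to_eye_distance < EYE_CLEARANCE_LIMIT:
            retract_arm()        # hard-wired response: never learned, always wins
            return "alarm: eye endangered"
        if joint_torque > JOINT_TORQUE_LIMIT:
            halt_motion()        # protects motors and joints from overload
            return "alarm: joint overload"
        return None              # no alarm, learned behaviour may proceed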
As Dennett already pointed out, it is much easier to create a simple sort of
'intelligence' and then let it grow up, becoming more flexible and versatile:
> DENNETT:
> Cog will not be an adult at first, in spite of its adult size. It is being
> designed to pass through an extended period of artificial infancy, during
> which it will have to learn from experience, experience it will gain in the
> rough-and-tumble environment of the real world. Like a human infant,
> however, it will need a great deal of protection at the outset, in spite of
> the fact that it will be equipped with many of the most crucial
> safety-systems of a living being. It has limit switches, heat sensors,
> current sensors, strain gauges and alarm signals in all the right places to
> prevent it from destroying its many motors and joints.
> DENNETT:
> The goal is that Cog will quickly "learn" to keep its funny bones from being
> bumped--if Cog cannot learn this in short order, it will have to have this
> high-priority policy hard-wired in. The same sensitive membranes will be
> used on its fingertips and elsewhere, and, like human tactile nerves, the
> "meaning" of the signals sent along the attached wires will depend more on
> what the central control system "makes of them" than on their "intrinsic"
> characteristics.
So even if Cog cannot 'feel' real pain, it at least has to be able to
interpret the signals from its sensors differently depending on the situation.
Incidentally, even we humans sometimes have difficulty interpreting our 'input'
appropriately: if we feel something very unexpected we are startled, even
though it may turn out to be entirely harmless.
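What "the meaning of the signal depends on what the central control system
makes of it" could look like in practice is sketched below; the contexts,
numbers and names are purely hypothetical and only illustrate that one and the
same raw signal can be routed to very different responses:

    # Hypothetical sketch: the same raw membrane signal gets a different "meaning"
    # depending on what the controller is currently doing.
    def interpret_touch(signal_strength, context):
        """Map an undifferentiated touch signal to a response, depending on context."""
        if signal_strength > 0.8:
            return "withdraw"         # a very strong signal is always treated as 'painful'
        if context == "reaching_for_object" and signal_strength > 0.2:
            return "grasp"            # expected contact: treated as a successful touch
        if context == "idle" and signal_strength > 0.2:
            return "orient_to_touch"  # unexpected contact: attend to it
        return "ignore"               # weak signal: treated as noise

    print(interpret_touch(0.4, "idle"))                 # orient_to_touch
    print(interpret_touch(0.4, "reaching_for_object"))  # grasp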
> DENNETT:
> Clearly, even if Cog really does have a Lebenswelt, it will not be the same
> as ours.
Cog's 'Lebenswelt' is not only much blurrier and more coarse-grained, owing to
its comparatively simple sensors, but also differs in the reasons for which it
considers things good or bad. It needs some sort of rule-based valuation system,
either hard-wired or acquired later during the learning process, in order to
judge its environment. We cannot say that Cog really dislikes something: if it
says so, it is not because of a bad feeling but because it has learned to
associate the signal with the attribute 'bad'.
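Such a valuation scheme could, very roughly, look like the following sketch: a
few hard-wired preferences plus learned associations that tag events as good or
bad. Again this is only my own illustration, not anything taken from the Cog
project:

    # Hypothetical sketch of innate plus learned "preferences".
    innate_valence = {"joint_overload": -1.0, "human_praise": +1.0}  # hard-wired
    learned_valence = {}                                             # acquired over time

    def valence(event):
        """How 'good' or 'bad' an event currently counts as, roughly in [-1, 1]."""
        return innate_valence.get(event, learned_valence.get(event, 0.0))

    def learn_association(event, following_event, rate=0.1):
        """Shift an event's learned valence toward the valence of what followed it."""
        current = learned_valence.get(event, 0.0)
        learned_valence[event] = current + rate * (valence(following_event) - current)

    learn_association("loud_noise", "joint_overload")  # loud noise becomes slightly 'bad'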
> DENNETT:
> How plausible is the hope that Cog can retrace the steps of millions of
> years of evolution in a few months or years of laboratory exploration?
This is another interesting point: the development of intelligent beings like
us took millions of years, so what reason is there to think that we can
speed up this process? Dennett provides two answers. First:
> DENNETT:
> The acquired design innovations of Cog-I can be immediately transferred to
> Cog-II, a speed-up of evolution of tremendous, if incalculable, magnitude.
The second answer is that, unlike in purely natural development, Cog is always
supervised by scientists, who can correct errors and teach it how to behave.
In my opinion, though, this could also be a restriction, because our own
intelligence is limited and we make plenty of mistakes. Perhaps our
supervision prevents Cog from reaching a degree of intelligence we have
never thought of. But of course, in these very early days of robotics and given
the aims of this project, it is perfectly fine to have human supervisors
involved.
But all supervision and teaching is of no help if the robot is not willing
to learn:
> DENNETT:
> Growing into an adult is a long, time-consuming business, [...] this will
> not work unless the team manages somehow to give Cog a
> motivational structure that can be at least dimly recognized, responded to,
> and exploited by naive observers. In short, Cog should be as human as
> possible in its wants and fears, likes and dislikes.
> [...]
> It must somehow delight
> in learning, abhor error, strive for novelty, recognize progress. It must be
> vigilant in some regards, curious in others, and deeply unwilling to engage
> in self-destructive activity. While we are at it, we might as well try to
> make it crave human praise and company, and even exhibit a sense of humor.
By implementing all these 'basic' abilities as well as they can, the scientists
hope to establish a good basis for further development in the direction of
consciousness:
> DENNETT:
> for the motivating insight for the project is that
> by confronting and solving actual, real time problems of self-protection,
> hand-eye coordination, and interaction with other animate beings, Cog's
> artificers will discover the sufficient conditions for higher cognitive
> functions in general [...].
Creating some sort of consciousness is not the only major goal; building the
robot's body is a big task in itself. Concerning this matter Dennett says:
> DENNETT:
> It is important to recognize that [...] having a body has been appreciated
> ever since
> [...]
> Not [...] because genuine embodiment provides some special vital
> juice that mere virtual-world simulations cannot secrete, but for the more
> practical reason [...] that unless you saddle yourself with all the
> problems of making a concrete agent take care of itself in the real world,
> you will tend to overlook, underestimate, or misconstrue the deepest
> problems of design.
If one builds a real robot rather than merely simulating one, one has to deal
with problems that would never have occurred otherwise, especially where
materials and devices interfere with each other:
> DENNETT:
> At this stage of the project, most of the problems being addressed would
> never arise in the realm of pure, disembodied AI. How many separate motors
> might be used for controlling each hand? They will have to be mounted
> somehow on the forearms. Will there then be room to mount the motor boards
> directly on the arms, close to the joints they control, or would they get in
> the way? How much cabling can each arm carry before weariness or clumsiness
> overcome it?
An interesting question arises when Cog runs into limitations of its
construction, or when some problem occurs:
> DENNETT:
> if Cog wants to do some fine-fingered
> manipulation, it will have to learn to "burn" some of the degrees of freedom
> in its arm motion by temporarily bracing its elbows or wrists on a table or
> other convenient landmark,
> [...]
> If Cog's eyes jiggle away from their preset aim, [...] there must be ways
> for Cog to compensate, short
> of trying continually to adjust its camera-eyes with its fingers.
Will Cog be able to find these solutions by itself?
Dennett now gives us an example of how the technologies used to create Cog
might interfere with each other:
> DENNETT:
> Earlier I mentioned a reason for using artificial muscles, not motors, to
> control a robot's joints, and the example was not imaginary. Brooks is
> concerned that the sheer noise of Cog's skeletal activities may seriously
> interfere with the attempt to give Cog humanoid hearing. There is research
> underway at the AI Lab to develop synthetic electro-mechanical muscle
> tissues, which would operate silently as well as being more compact, but
> this will not be available for early incarnations of Cog.
Although it is not strictly necessary at the moment, Cog's constructors are
designing some parts in a more human-like way:
> DENNETT:
> For an entirely
> different reason, thought is being given to the option of designing Cog's
> visual control software as if its eyes were moved by muscles.
> [...]
> Because the "opponent-process" control system exemplified by eye-muscle
> controls is apparently a deep and ubiquitous feature of nervous systems,
> [...]
> If we are going to have such competitive systems at
> higher levels of control, it might be wise to build them in "all the way
> down,"
So future projects are already being taken into consideration, and the
designers do not want to close off future options.
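The general idea of 'opponent-process' control is that two antagonistic drive
signals work against each other, like a pair of eye muscles: their difference
determines the movement, their sum the stiffness. A small illustrative sketch
of that idea (my own, not Cog's software):

    # Hypothetical sketch of an opponent-process (antagonistic) drive pair.
    def opponent_drive(agonist, antagonist):
        """Return (net_command, co_contraction) from two non-negative drive signals."""
        agonist = max(0.0, agonist)
        antagonist = max(0.0, antagonist)
        net_command = agonist - antagonist      # direction and speed of the movement
        co_contraction = agonist + antagonist   # stiffness / damping of the joint
        return net_command, co_contraction

    cmd, stiffness = opponent_drive(0.7, 0.3)   # small 'rightward' command, moderate stiffness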
To make sure that the robot's constructors always remain in charge of their
creature, precautions have been taken for the case of an emergency. But even
this is not as easy as it sounds, as Dennett explains:
> DENNETT:
> Other practicalities are more obvious, or at least more immediately
> evocative to the uninitiated. Three huge red "emergency kill" buttons have
> already been provided in Cog's environment, to ensure that if Cog happens to
> engage in some activity that could injure or endanger a human interactor (or
> itself), there is a way of getting it to stop. But what is the appropriate
> response for Cog to make to the KILL button? If power to Cog's motors is
> suddenly shut off, Cog will slump, and its arms will crash down on whatever
> is below them. Is this what we want to happen? Do we want Cog to drop
> whatever it is holding? What should "Stop!" mean to Cog? This is a real
> issue about which there is not yet any consensus.
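To see why "Stop!" is not trivial to define, compare three possible responses
to the kill button in the sketch below. The robot interface is a made-up stub
that only prints what would happen; it is not Cog's actual control system:

    # Hypothetical stub illustrating alternative meanings of the KILL button.
    class RobotStub:
        def power_off(self):         print("power removed from all motors")
        def hold_current_pose(self): print("joints held in place (motors stay energised)")
        def brake_joints(self):      print("joints decelerated under control")
        def lower_arms_slowly(self): print("arms lowered, held object put down")

    def emergency_stop(robot, mode):
        if mode == "cut_power":
            robot.power_off()            # simplest, but the arms slump and crash down
        elif mode == "freeze":
            robot.hold_current_pose()    # nothing moves, but nothing is released either
        elif mode == "controlled":
            robot.brake_joints()         # decelerate smoothly first
            robot.lower_arms_slowly()    # put down whatever is being held
            robot.power_off()            # only then remove power

    emergency_stop(RobotStub(), "controlled")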
Dennett now turns back to more fundamental questions:
> 3. THREE PHILOSOPHICAL THEMES ADDRESSED
> DENNETT:
> A recent criticism of "strong AI" that has received quite a bit of attention
> is the so-called problem of "symbol grounding" (Harnad, 1990). It is all
> very well for large AI programs to have data structures that purport to
> refer to Chicago, milk, or the person to whom I am now talking, but such
> imaginary reference is not the same as real reference, according to this
> line of criticism. These internal "symbols" are not properly "grounded" in
> the world, and the problems thereby eschewed by pure, non-robotic, AI are
> not trivial or peripheral.
> DENNETT:
> Another claim that has often been advanced, most carefully by Haugeland
> (1985), is that nothing could properly "matter" to an artificial
> intelligence, and mattering (it is claimed) is crucial to consciousness.
> [...]
> Cog will be equipped with some "innate" but
> not at all arbitrary preferences, and hence provided of necessity with the
> concomitant capacity to be "bothered" by the thwarting of those preferences,
> and "pleased" by the furthering of the ends it was innately designed to
> seek. Some may want to retort: "This is not real pleasure or pain, but
> merely a simulacrum." Perhaps, but on what grounds will they defend this
> claim?
These problems relate to what I described earlier about what Dennett calls
Cog's 'Lebenswelt', which is different from ours. But Dennett
says:
> DENNETT:
> The reasons for saying that something does
> matter to Cog are not arbitrary; they are exactly parallel to the reasons we
> give for saying that things matter to us and to other creatures.
> DENNETT:
> Finally, J.R. Lucas has raised the claim (at this meeting) that if a robot
> were really conscious, we would have to be prepared to believe it about its
> own internal states. I would like to close by pointing out that this is a
> rather likely reality in the case of Cog. Although equipped with an optimal
> suite of monitoring devices that will reveal the details of its inner
> workings to the observing team, Cog's own pronouncements could very well
> come to be a more trustworthy and informative source of information on what
> was really going on inside it. The information visible on the banks of
> monitors, or gathered by the gigabyte on hard disks, will be at the outset
> almost as hard to interpret, even by Cog's own designers, as the information
> obtainable by such "third-person" methods as MRI and CT scanning in the
> neurosciences.
In my opinion, a system that is so complex that even its creators cannot
explain it is in danger of getting out of control after some time. But maybe I
have just watched too many sci-fi movies...
The fact that it is very difficult for us to tell what is really going on
inside the robot reminds me again of the Turing Test in conjunction with the
Chinese Room Argument, because we can never tell what this robot's 'Lebenswelt'
looks like, or whether it even has one.