From: Bon Mo (bym198@ecs.soton.ac.uk)
Date: Tue Mar 06 2001 - 15:31:36 GMT
Bon Mo
Dennett: The Practical Requirements for Making a Conscious Robot
http://cogsci.soton.ac.uk/~harnad/Papers/Py104/dennett.rob.html
Mo:
The idea of this paper is to make a robot that can interact with human
beings in a robust and versatile manner in real time, take care of itself,
and tell its designers things about itself that would otherwise be
extremely difficult if not impossible to determine by examination.
> DENNETT:
> Might a conscious robot be "just" a stupendous assembly of more
> elementary artifacts--silicon chips, wires, tiny motors and cameras--or
> would any such assembly, of whatever size and sophistication, have
> to leave out some special ingredient that is requisite for consciousness?
Mo:
To the best of my biological knowledge, a human being is completely
coded by DNA. This DNA is passed down hereditary lines and changed by
mutations (from radiation, for example), with natural selection shaping
which changes persist. The genes in DNA are a map of how the human body's
structure is formed, and even hold details of mutations, such as diseases
and viruses, within your body. The point of this is that every last fibre
of us can, in principle, be replicated. At the present date animal cloning
is possible, and human clones are becoming a reality. A robot's silicon
chips, wires, tiny motors and cameras can all represent parts of the human
anatomy, so you can build a robot to mimic human functions. Consciousness
is different: a human clone can have exactly the same functionality as the
person it was cloned from, yet someone's consciousness is individual to
them. It is possibly accumulated knowledge from experiences and education,
but even if someone came from the same background as you and learnt
everything the same way, their thoughts and beliefs might still differ
from yours. To me, someone's
consciousness is the most private and intimate part of them. I assume
it is what makes humans unique. If you saw a consciousness as a
group of rules and facts that one follows, you could assume that
probabilistically there is someone else out there who has the same
consciousness as you. Unless you ever met this person and discussed
openly all your thoughts, you would never know and could not prove
this. Therefore a robot may have its own unique groups of rules and
facts and believe it has a consciousness. We would have to get a
hard copy of the data that is going through the robot's processor and
try to find out why it believes it has a consciousness. Even then we
only know our own thought processes, and cannot judge if someone or
something else has a mind. This will be re-considered later.
> DENNETT:
> I suspect that dualism would never be seriously considered if there
> weren't such a strong undercurrent of desire to protect the mind from
> science, by supposing it composed of a stuff that is in principle
> uninvestigatable by the methods of the physical sciences.
Mo:
The human brain, however complex, can now be modelled to approximate
the processes that happen inside it. We understand the neurochemical
processes and the functions within neurons and synapses. The brain can be
modelled with neural networks: the firing of action potentials and the
interconnectivity are represented by activation values for each 'node' in
a neural net and by adjustable weights (to allow for thresholding); a
small sketch of such a node is given below. The localised sections of the
brain used for different functions, such as walking or speech, can be seen
using an electroencephalograph (EEG), which measures the electrical
activity of the brain and highlights the stimulated regions.
The more we learn about the functionality of the brain, the more
pretensions are quashed.
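To make the idea of a 'node' with adjustable weights and a threshold
concrete, here is a minimal sketch in Python. The inputs, weights and
threshold value are invented purely for illustration; this is not a model
of any real neuron or of any particular neural-net package.

    # A single neural-net "node": a weighted sum of inputs passed through
    # a threshold, loosely analogous to a neuron firing once its combined
    # input exceeds some level. All numbers below are invented.
    def node_fires(inputs, weights, threshold):
        activation = sum(w * x for w, x in zip(weights, inputs))
        return activation >= threshold

    # The node "fires" only when the weighted sum reaches the threshold.
    print(node_fires([1.0, 0.2], weights=[0.6, 0.9], threshold=0.7))  # True
    print(node_fires([0.1, 0.2], weights=[0.6, 0.9], threshold=0.7))  # False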
> DENNETT:
> So there might be straightforward reasons of engineering that showed
> that any robot that could not make use of organic tissues of one sort or
> another within its fabric would be too ungainly to execute some task
> critical for consciousness.
Mo:
Robots can adapt to their environment by using sensors to read input,
performing some symbol manipulation on the digitised data, and reacting
via mechanical outputs. This can be advantageous over purely organic
systems. Robots can be more durable: each part can be replaced with a
spare that is physically and functionally identical to the original.
Mechanical parts last longer than organic tissues, although the human
body, while unable to regrow whole limbs, does continually regenerate
cells such as skin cells. Mechanical parts do not require much
maintenance, whereas organic tissues require oxygen and chemical nutrients
as well as balanced temperatures and water levels. Another aspect of
mechanical systems is that digital messages travel faster than
neurochemical processes, although at present mechanical rotations are
slower than tissue movement; this speed will increase as mechanical
engineering improves. Suppose that Dennett's robot has a symbolic
processor and organic tissue. Take, for example, a human losing a limb and
receiving an artificial replacement. The brain still behaves as if the
limb is there and releases neural signals, which the artificial limb needs
to register. At present these mechanical limbs are frustrating and
difficult to use: the digital circuitry of the limb cannot resolve the
changing electrical and chemical pulses, or why and how they should affect
the limb. This is why I believe that a robot cannot take any real
advantage of organic tissue: if humans cannot make efficient use of
mechanical aids, why should a robot be able to make efficient use of an
organic structure?
> DENNETT:
> And to take a threadbare philosophical example, an atom-for-atom duplicate
> of a human being, an artifactual counterfeit of you, let us say, might not
> legally be you, and hence might not be entitled to your belongings, or
> deserve your punishments, but the suggestion that such a being would not
> be a feeling, conscious, alive person as genuine as any born of woman is
> preposterous nonsense.
Mo:
I agree completely with Dennett: a perfect DNA replica of you, regardless
of how long a history it has, should be able to feel and be conscious,
regardless of what it has learnt. Surely a replica with no history would
be mentally equivalent to a newborn baby. I do not think anyone doubts
that a baby can feel or has its own mind, so surely a replica would too.
> DENNETT:
> Making a fully-equipped conscious adult robot might just be too much work.
> It might be vastly easier to make an initially unconscious or nonconscious
> "infant" robot and let it "grow up" into consciousness, more or less the
> way we all do.
Mo:
A human baby may have built-in survival skills, such as keeping warm and
feeding, but a more interesting realm is its seemingly unlimited ability
to learn and the motivation required to learn. A baby may not be capable
of doing much at birth, but it can learn from examples: from its
experiences of how things are done, from the mistakes it has made and from
its general education. The brain processes and stores these facts and
rules; any scenario that requires thought, be it conscious or unconscious,
requires making inductive hypotheses based on the knowledge the brain
holds. These hypotheses may or may not be correct, and with continued
learning of new facts and rules they may change. This cumulative scaling
up of the brain allows us to gain insights into new areas of knowledge we
previously knew nothing about. This knowledge gathers throughout your life
until the day you die, and so those, say, 80 years of learning would be
extremely difficult to program into a robot in less time. Another problem
is that we cannot explicitly detail the rules we carry out within our
brain. So perhaps the best alternative would be to design an "infant"
robot with a learning neural network, and let it develop by itself.
> DENNETT:
> There is no reason at all to believe that some one part of the brain is utterly
> irreplacible by prosthesis, provided we allow that some crudity, some
> loss of function, is to be expected in most substitutions of the simple for
> the complex. An artificial brain is, on the face of it, as "possible in principle"
> as an artificial heart, just much, much harder to make and hook up.
Mo:
The human brain is mainly made up of sensors and motors. No figures
exist, but the majority of the brain is made up of sensory material, which
passes analogue messages from the outer sensory devices, such as the eye
and the inner ear, to inner connections across the grey matter. Somewhere
in the minority of the brain must be an area which can process the input
so that the body can react to it. This part must use the store of
knowledge and convert it into something useful. Unfortunately we do not
understand how the brain stores knowledge, or how it carries out its
functions. Dennett's suggestion that only part of the brain's
functionality is lost if a prosthesis replaces it sounds ludicrous: sure,
we can replicate the sensors, but all the knowledge and experience is
removed when the brain is replaced. You would be left with a "brain-dead"
human, who could possibly react to outside stimuli but would lose all of
their accumulated functionality.
> DENNETT:
> Maybe we could even learn something interesting about what the truly
> hard problems are without ever settling any of the issues about
> consciousness.
Mo:
The next section is on a humanoid robot project: Cog. Dennett leaves
the question of consciousness and concentrates more on designing the
robot. Cog is adult-sized, with moving arms, a head, and two eyes.
> DENNETT:
> Cog's eyes won't give it visual information exactly like that provided to
> human vision by human eyes (in fact, of course, it will be vastly degraded),
> but the wager is that this will be plenty to give Cog the opportunity to
> perform impressive feats of hand-eye coordination, identification, and
> search. At the outset, Cog will not have colour vision.
Mo:
If Cog does not have colour vision, I assume it can only decipher
black-and-white images. This is similar to the rods in the human eye,
which give us scotopic vision. The human eye, though, is not an ideal
model for robotic vision. Human eyes show brightness-adaptation effects
such as Mach bands, which give us the perception of edges and brightness
differences that are not there. The brain itself is subject to illusion.
I would like to know
how much detail Cog can see and how it generates stereoscopic vision.
> DENNETT:
> Since its eyes are video cameras mounted on delicate, fast-moving
> gimbals, it might be disastrous if Cog were inadvertently to punch itself
> in the eye, so part of the hard-wiring that must be provided in advance is
> an "innate" if rudimentary "pain" or "alarm" system to serve roughly the
> same protective functions as the reflex eye-blink and pain-avoidance
> systems hard-wired into human infants.
Mo:
Human infants do not have hard-wired pain-avoidance systems. When
they are in pain they may cry, but they learn more by trial and error.
You do not know that something might cause you pain until it happens;
infants are naturally curious about their strange new environment and will
try to explore all the objects around them. It is up to the adults around
them to educate them and to restrain them from hurting themselves. Cog's
hard-wiring may stop it from ever "hitting" itself, but surely it cannot
learn anything from that. To Cog these are just built-in rules that it
must obey.
> DENNETT:
> The goal is that Cog will quickly "learn" to keep its funny bones from
> being bumped--if Cog cannot learn this in short order, it will have to have
> this high-priority policy hard-wired in. The same sensitive membranes
> will be used on its fingertips and elsewhere, and, like human tactile
> nerves, the "meaning" of the signals sent along the attached wires will
> depend more on what the central control system "makes of them" than
> on their "intrinsic" characteristics.
Mo:
This suggests to me that a hierarchical system needs to be built, such
as Rodney Brooks' subsumption architecture, where the lower-level
functions do simple tasks such as moving parts and not bumping into
things, and the higher levels of the hierarchy handle more sophisticated
functions such as problem solving and survival instincts. The problem with
this is how to assign the connectivity between the levels and the priority
that each function is given; a minimal sketch of the idea follows.
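As a rough illustration of that layering, here is a minimal sketch of a
subsumption-style controller in Python. The behaviour names and sensor
keys are invented, and a real subsumption architecture wires behaviours
together with suppression and inhibition links rather than a simple
priority list; this is not the actual Cog control system.

    # Sketch of a layered controller: each layer either proposes a command
    # or stays silent, and layers earlier in the list take priority.
    class AvoidBumps:
        # Low-level competence: stop whenever a touch sensor fires.
        def act(self, sensors):
            return "stop" if sensors.get("touch") else None

    class Explore:
        # Higher-level competence: reach for seen objects, else wander.
        def act(self, sensors):
            return "reach" if sensors.get("object_seen") else "wander"

    class Controller:
        def __init__(self, layers):
            self.layers = layers  # ordered from highest to lowest priority

        def step(self, sensors):
            for layer in self.layers:
                command = layer.act(sensors)
                if command is not None:
                    return command
            return "idle"

    controller = Controller([AvoidBumps(), Explore()])
    print(controller.step({"touch": True, "object_seen": True}))  # stop
    print(controller.step({"object_seen": True}))                 # reach
    print(controller.step({}))                                    # wander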
> DENNETT:
> Although Cog is not specifically intended to demonstrate any particular
> neural net thesis, it should come as no surprise that Cog's nervous
> system is a massively parallel architecture capable of simultaneously
> training up an indefinite number of special-purpose networks or
> circuits, under various regimes.
Mo:
Parallel architectures are required to carry out more than one instruction
at a time. The inputs arrive in parallel and the computations are done in
parallel; the output is usually a weighted sum of the inputs, and this
output often forms part of the next input. This allows a simple algorithm
to approximate a learning system that uses adjustable weights and
thresholds. Usually such an architecture uses a teacher that supplies a
set of training data containing the inputs and the target outputs. The
weights in the algorithm are then adjusted after each iteration so that
the output approximates the target value; a small sketch of this kind of
training loop follows.
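Here is a minimal sketch of that teacher-driven weight-adjustment loop for
a single thresholded unit (essentially the classic perceptron rule). The
training data and learning rate are made up for illustration and have
nothing to do with Cog's actual networks.

    # A single thresholded unit trained by a "teacher" that supplies
    # inputs and target outputs; weights are nudged after each example.
    def predict(weights, bias, inputs):
        total = bias + sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= 0 else 0

    def train(samples, epochs=20, rate=0.1):
        weights, bias = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                error = target - predict(weights, bias, inputs)
                weights = [w + rate * error * x
                           for w, x in zip(weights, inputs)]
                bias += rate * error
        return weights, bias

    # Teacher-supplied data: learn a logical AND of two inputs.
    samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    weights, bias = train(samples)
    print([predict(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]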
> DENNETT:
> How plausible is the hope that Cog can retrace the steps of millions of
> years of evolution in a few months or years of laboratory exploration?
Mo:
Our genetic make-up is the product of millions of years, and the way
humans have evolved through natural selection has been slow. The reason
the human body is the way it is comes down to survival: being able to run
after our food and to smell predators from downwind. Every part of our
being is essential, but apart from our innate survival instincts, all the
other parts of our programming derive from what we have learnt. Cog is
already hard-wired for its own survival, and so the rest of its knowledge
can be learnt in finite time. Perhaps it is not practical to assume that
in fifty years' time Cog could evolve by reproduction; the real question
is whether Cog could solve "truly hard problems", which is more plausible.
> DENNETT:
> One talent that we have hopes of teaching to Cog is a rudimentary
> capacity for human language.
Mo:
Language is essential for dynamic interaction, and the idea of Cog
learning phrases is a challenge for neural networks. Modern-day mobile
phones can store and process names for speech recognition. On a
larger-scale problem such as Cog, the same fundamental learning algorithms
apply; the main change is a larger memory capacity to store and process
the phrases. A language is a complex thing to learn: there are a lot of
grammar rules for a start, and there are always new words emerging. The
hardest part, though, is for the robot to understand what a phrase
actually refers to. If a human went up to Cog and trained it to say
"Goodnight" after the human said the same thing, Cog might learn to reply
on demand, but does it know what the human means? When someone says
"Goodnight" it could be for a number of reasons: they are tired and want
to go to sleep, or they have finished work and are just being polite
before leaving. A phrase can refer to more than one thing, so its meaning
is complicated; it can have multiple referents. A neural net only
processes the symbolic elements; even if Cog constructed a legal sentence,
it would still need to ground the phrase in a correct meaning.
> DENNETT:
> In short, Cog should be as human as possible in its wants and fears,
> likes and dislikes. If those anthropomorphic terms strike you as
> unwarranted, put them in scare-quotes or drop them altogether and
> replace them with tedious neologisms of your own choosing: Cog,
> you may prefer to say, must have goal-registrations and
> preference-functions that map in rough isomorphism to human desires.
Mo:
Robots could become advanced enough to mimic physical human
functions, but I am still not convinced about how they would show emotions
and feelings. Perhaps these can be simulated, such as having the robot's
eyes go red if it is "angry", or having water leak out of holes to
simulate "sadness". What we humans experience when we go through an
emotion is a chemical change. This imbalance changes our physical
appearance, making our hairs rise, our heart pound and so on. The
biological change also affects the way our brain works, causing heightened
activity in some parts of the brain and reduced activity in others. I
question how these biochemical changes could be reproduced in a mechanical
robot, and how the robot could ever understand the relevance of such
minute changes.
> DENNETT:
> Cog stores both its genetic endowment (the virtual machine) and its long
> term memory on disk when it is shut down, but when it is powered on, it
> first configures itself and then stores all its short term memory distributed
> one way or another among its 64 nodes.
Mo:
How is this memory distributed? Does each node hold a specific function,
and how does the collaboration of this information actually work? Is a
node assigned a priority or weighting for its influence on other nodes?
> DENNETT:
> Anything in Cog that might be a candidate for symbolhood will automatically
> be "grounded" in Cog's real predicament, as surely as its counterpart in any
> child, so the issue doesn't arise, except as a practical problem for the Cog
> team, to be solved or not, as fortune dictates.
Mo:
Dennett claims that the symbol grounding problem does not apply to Cog,
purely because Cog is "equivalent" to a child's brain, and that a child
sometimes cannot ground some things it has been taught. The example
given is for Cog to comment to someone about Chicago. Cog needs to
be able to recognise that Chicago by itself is a geographical location,
and that the events that occur there, the people that live there and any
famous landmarks should all be referred back to Chicago. Harnad's symbol
grounding problem asks how the referent is derived from just symbols: does
there need to be more than just symbols for Cog to understand the meaning?
Harnad believes that the symbols need to be grounded using external
sensorimotor capabilities. Cog has these sensors, and if the information
from the sensors can be connected to the symbolic information, then a
grounding can be made. The problem is defining how this grounding takes
place.
> DENNETT:
> if Cog develops to the point where it can conduct what appear to be
> robust and well-controlled conversations in something like a natural
> language, it will certainly be in a position to rival its own monitors (and the
> theorists who interpret them) as a source of knowledge about what it is
> doing and feeling, and why.
Mo:
Dennett concludes that if Cog could converse in a natural language,
then it could also tell us how it is feeling and why. I would say that
even if Cog could construct a legal sentence that answers correctly, there
may always be the chance that Cog has not encountered a word or does not
understand what it has been asked. In that case it must identify a general
or "best alternative" answer to the scenario. It seems more than a large
step from generalising to actually giving full interpretations of its own
well-being. How is it supposed to support its own argument other than by
offering a memory dump of the relevant contents? Cog must be convincing
enough to make you think that it understands what it is saying and doing
before you can believe in its arguments.
My final thought on this paper is that humans are constantly changing
their minds as they learn new facts. We believe in our own arguments until
something that better explains part of our knowledge is brought to our
attention. All the comments I have written are based on the accumulation
of knowledge I have learnt, and it is fair to say that as I develop, so
too can Cog and any of its descendants.