From: Henderson Ian (irh196@ecs.soton.ac.uk)
Date: Thu May 24 2001 - 13:09:19 BST
In reply to: Shynn Chris: "Re: Dennett: Making Conscious Robots"
>> DENNETT:
>> part of the hard-wiring that must be provided in advance is an
>> "innate" if rudimentary "pain" or "alarm" system to serve roughly the
>> same protective functions as the reflex eye-blink and pain-avoidance
>> systems hard-wired into human infants.
> Shynn:
> I also agree with Dennett here, but only to a point.
> Yes, human infants have reflexes and pain-avoidance systems built
> in, but they learn by trial and error what will set off
> those alarms. Infants are not in my opinion a good model for this
> as, though they have the systems, they are almost unable to make
> use of them as they have not yet built up an idea of what will
> set them off. I believe that this could also be the case with
> Cog: in the early stages, while he is still learning what will set
> off the alarms, he may damage much equipment in that learning.
Henderson:
As Shynn points out, Cog will learn through making mistakes, and
these mistakes may take on a physical form that leads to it
damaging itself. However, Cog's experimenters should not try to
prevent this eventuality: children learn physical coordination by
falling over, bumping into things, and hurting themselves. Cog
will not be able to interact with its environment effectively
unless it can learn to coordinate itself physically (how can it
pick up and play with objects unless it has learnt the rudiments of
physical coordination?), and thus such coordination may be a
necessary prerequisite for symbol grounding to occur.
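To make this concrete, here is a minimal sketch of the kind of
trial-and-error pain-avoidance loop I have in mind (in Python; the
action names, the 'alarm' condition and the numbers are all invented
for illustration -- this is not Cog's actual architecture):

    # Toy illustration: an action that triggers the 'alarm' becomes less
    # likely to be chosen again. All names and numbers are hypothetical.
    import random

    actions = ["reach_left", "reach_right", "swing_arm"]
    preference = {a: 1.0 for a in actions}  # initial preference per action

    def alarm_triggered(action):
        # Stand-in for the hard-wired 'pain' membrane: here, swinging
        # the arm is the action that bumps the 'funny bone'.
        return action == "swing_arm"

    for trial in range(100):
        # Choose an action with probability proportional to preference.
        action = random.choices(actions,
                                weights=[preference[a] for a in actions])[0]
        if alarm_triggered(action):
            # The 'pain' signal punishes the action that caused it.
            preference[action] *= 0.5

    print(preference)  # 'swing_arm' ends up with a much lower preference

The point is only that the alarm signal itself carries no instructions;
what matters is how it is fed back into the choice of future actions.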
>> DENNETT:
>> The goal is that Cog will quickly "learn" to keep its funny bones from
>> being bumped--if Cog cannot learn this in short order, it will have to
>> have this high-priority policy hard-wired in. The same sensitive
>> membranes will be used on its fingertips and elsewhere, and, like human
>> tactile nerves, the "meaning" of the signals sent along the attached
>> wires will depend more on what the central control system "makes of
>> them" than on their "intrinsic" characteristics. A gentle touch,
>> signalling sought-for contact with an object to be grasped, will not
>> differ, as an information packet, from a sharp pain, signalling a need
>> for rapid countermeasures.
> Shynn:
> Here Dennett puts forward the idea of modelling the lower
> level functions of the human brain in Cog. These reactions to the
> data-packets sent by the membrane will be essential learning
> material and it would be interesting to see how Cog will
> differentiate between a light touch upon an object and a touch
> that would break something as according to Dennett they will be
> exactly the same data-wise.
Henderson:
I'm not sure this is what Dennett is saying. All he means is that
a prick from a needle and the gentle touch of an object will both
result in packets of information being sent to the brain. These
packets have no intrinsic semantics, and must be interpreted by
Cog's processors as 'pain' or whatever. I do not think he is saying
that the data contained within those packets will be identical
bit-for-bit. For instance, the needle prick may result in a data
packet containing a zero pressure reading for every area of the
fingertip except the part being pricked, which will be assigned a
large pressure reading; conversely, a gentle touch may be encoded as
small pressure values distributed uniformly across all the sensors
of the fingertip. I would expect the sensory membrane of the
fingertips to be able to measure heat and moisture as well as
pressure.
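To illustrate the distinction (the encoding below is entirely made up
for the sake of argument), both events could arrive as the same kind
of data structure -- an array of pressure readings -- with the
'meaning' assigned only by the process that interprets them:

    # Hypothetical fingertip readings: a 4x4 grid of pressure values.
    # A needle prick is a single extreme reading; a gentle touch is a
    # low, uniform reading. Same data type, different content.
    needle_prick = [[0.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 9.5, 0.0],
                    [0.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 0.0]]

    gentle_touch = [[0.3] * 4 for _ in range(4)]

    def interpret(packet, pain_threshold=5.0):
        # The 'meaning' is assigned here, by the central interpreter,
        # not carried intrinsically by the numbers themselves.
        if max(max(row) for row in packet) > pain_threshold:
            return "pain -- withdraw"
        if sum(sum(row) for row in packet) > 0:
            return "contact -- grasp"
        return "nothing"

    print(interpret(needle_prick))  # pain -- withdraw
    print(interpret(gentle_touch))  # contact -- grasp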
>> DENNETT:
>> We are going to try to get Cog to build language the hard way, the way
>> our ancestors must have done, over thousands of generations. Cog has
>> ears (four, because it's easier to get good localization with four
>> microphones than with carefully shaped ears like ours!) and some
>> special-purpose signal-analyzing software is being developed to give Cog
>> a fairly good chance of discriminating human speech sounds, and probably
>> the capacity to distinguish different human voices.
> Shynn:
> Here Dennett describes what will be designed into Cog to
> aid him in picking up the human language, or at least some
> rudimentary form of language. But I believe that just to give him
> this equipment will not be enough; I think that Cog will also
> need to have a sort of desire to imitate its 'mothers' like human
> infants do when they start to speak for the first time. I think
> that if Cog does have this desire and also the desire to
> understand what it is imitating then he will be able to pick up
> language much as a human infant does. But it is the understanding
> of the language it is learning that is the most important,
> otherwise Cog becomes just a parrot who has a basic use of
> language but no understanding of what that language refers to.
Henderson:
This is where grounding comes in, of course. This is why Cog has
been equipped with eyes, ears, arms and hands -- so that it can see,
hear and touch the objects around it. It can then learn, through
supervision, to associate symbols with those objects and, hopefully,
come to understand what those symbols mean.
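As a toy picture of what such supervised association might look like
(the feature vectors and object names below are invented; real
grounding would have to work from raw sensor data rather than
hand-made features):

    # Minimal sketch: a 'teacher' names the object Cog is currently
    # sensing, and Cog stores an association between the symbol and the
    # sensory features. A new input is later matched to the nearest
    # stored prototype. All features and names are purely illustrative.
    prototypes = {}  # symbol -> averaged sensory features

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def supervise(symbol, features):
        # Teacher pairs a symbol with the current (sight + touch) features.
        if symbol not in prototypes:
            prototypes[symbol] = list(features)
        else:
            prototypes[symbol] = [(p + f) / 2
                                  for p, f in zip(prototypes[symbol], features)]

    def name_object(features):
        # Cog 'grounds' a symbol by picking the closest learned prototype.
        return min(prototypes, key=lambda s: distance(prototypes[s], features))

    supervise("ball", [0.9, 0.1, 0.8])   # e.g. round, light, red
    supervise("block", [0.1, 0.9, 0.2])  # e.g. angular, heavy, blue
    print(name_object([0.8, 0.2, 0.7]))  # -> 'ball'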
>> DENNETT:
>> if Cog develops to the point where it can conduct what appear to be
>> robust and well-controlled conversations in something like a natural
>> language, it will certainly be in a position to rival its own monitors
>> (and the theorists who interpret them) as a source of knowledge about
>> what it is doing and feeling, and why.
> Shynn:
> Here is where Dennett concludes by saying that, since Cog is
> designed to re-design himself as much as possible, it will
> eventually be Cog who will be the expert on what is happening to
> Cog. If he manages to reach a stage where he is able to converse
> with his designers, he will have redesigned himself so much that
> only he will be able to tell the designers what is happening in
> him, as they, although experts, will not have current
> operational knowledge of Cog at any one time.
Henderson:
But even if Cog is able to converse fluently with his designers, will he be
able to explain his internal workings to them as they desire? If humans had
this capability, we would have been able to explain the workings of our
brains long ago! He may be able to provide some evidence about how he
does things, but this evidence is likely to be incomplete and empirical
in nature. This is a major problem for the designers: robotics is being
used
here as a technique to help reverse engineer intelligence, and if it is to
help us in this aim we somehow need to get 'inside' the robot to see *how*
it is doing what it is doing. This is not easy: Cog is not an
implementation-independent symbol system, so we can't just climb
inside, see through its
'eyes', and carry out its computations for ourselves. We can collect much
evidence in the form of monitors showing what Cog is seeing, readouts
of data, and even words from the mouth of Cog itself, but this does
not mean that the experimenters will necessarily understand *how* Cog
performs
certain tasks at the end of the day. If this is the case, then what have we
learnt about how humans carry out these tasks? This is one of the gambles
the MIT team are taking with the Cog project -- that Cog will prove
sufficiently transparent to offer insight into the nature of human
intelligence, or at least the nature of the problems faced in scaling up to
it. At least, unlike in the case of the human brain, they will fully
understand the
hardware components from which Cog is made, and this knowledge, coupled with
empirical data from Cog itself, should hopefully help them decipher Cog's
'thought' processes.
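By way of illustration of the kind of evidence I mean, a monitoring
hook might log Cog's internal state alongside its behaviour for later
analysis (a hypothetical sketch; the fields and file format are my own
invention, not anything the MIT team has described):

    # Hypothetical monitoring hook: record internal state alongside the
    # action just taken, so experimenters can correlate the two later.
    # Such logs are third-person evidence about *what* Cog did, not a
    # direct view of *how* it did it.
    import json, time

    def log_snapshot(logfile, internal_state, behaviour):
        record = {"timestamp": time.time(),
                  "internal": internal_state,   # e.g. sensor readings
                  "behaviour": behaviour}       # e.g. the action taken
        logfile.write(json.dumps(record) + "\n")

    with open("cog_trace.jsonl", "w") as f:
        log_snapshot(f, {"fingertip_pressure": 0.3, "arm_angle": 42.0},
                     "grasp")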