Dennett: Cognitive Science as Reverse Engineering

From: Hunt Catherine (chh398@ecs.soton.ac.uk)
Date: Thu Mar 29 2001 - 21:16:34 BST


Hunt:
Dennett has written this short paper to introduce the subject areas of
top-down and bottom-up methodologies and how they relate to forward and
reverse engineering. The relations between these areas are explored, as
well as their relationship with cognitive science and its implications.
It is not a particularly controversial paper, as a large portion of it
walks the reader through the various meanings associated with the topic
and draws analogies to demonstrate them.

>DENNETT:
>To a first approximation, the terms are used to characterize both research
>methodologies on the one hand, and models (or features of models) on the
>other. I shall be primarily concerned with the issues surrounding top-down
>versus bottom-up methodologies, but we risk confusion with the other
>meaning if we don't pause first to illustrate it, and thereby isolate it
>as a topic for another occasion.

Hunt:
The author starts by noting that there are two ways of looking at the
terms top-down and bottom-up: they can be viewed both in terms of models
and as methodologies (a system of methods). The paper itself primarily
deals with methodologies, while models are illustrated early on and then
set aside as a topic for another occasion.

>DENNETT:
>...Let's briefly consider, then, the top-down versus bottom-up polarity in
>models of a particular cognitive capacity, language comprehension.

>When a person perceives (and comprehends) speech, processes occur in the
>brain which must be partly determined bottom-up, by the input and partly
>determined top-down, by effects from on high, such as interpretive
>dispositions in the perceiver due to the perceiver's particular knowledge
>and interests.

Hunt:
My understanding of top-down and bottom-up modeling is that with
top-down, you begin with a starting symbol and apply rules to it until
you derive the input and its meaning. With bottom-up, you begin with a
set of symbols, i.e., a sentence, and combine its sub-components until
you understand the whole. Here, Dennett uses the analogy of language
comprehension to demonstrate the difference. He suggests that when we
hear someone speaking, the words are treated as input to the brain and
processed in a bottom-up fashion - building the meaning up from the
sub-components. But the knowledge that we hold in our brains prior to the
sentence being fed in as input means the sentence is dealt with in a
top-down fashion as well - our expectations and rules shape how the
components are interpreted.
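
To make the two directions concrete, here is a minimal sketch in Python
of both parsing strategies over a toy grammar of my own invention (the
grammar, function names, and example sentence are illustrative
assumptions, not anything from Dennett's paper): top-down expands from
the start symbol S down towards the words, while bottom-up reduces the
words back up to S.

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["sees", "NP"]],
    "N":  [["dog"], ["cat"]],
}

def top_down(symbol, words):
    """Return the leftover words if `symbol` derives a prefix of `words`."""
    if symbol not in GRAMMAR:                 # a literal word: match it
        return words[1:] if words and words[0] == symbol else None
    for rule in GRAMMAR[symbol]:              # try each expansion of symbol
        rest = words
        for part in rule:
            rest = top_down(part, rest)
            if rest is None:
                break
        else:
            return rest
    return None

def bottom_up(words):
    """Shift-reduce style: repeatedly rewrite sequences into symbols."""
    symbols = list(words)
    changed = True
    while changed:
        changed = False
        for lhs, rules in GRAMMAR.items():
            for rule in rules:
                for i in range(len(symbols) - len(rule) + 1):
                    if symbols[i:i + len(rule)] == rule:
                        symbols[i:i + len(rule)] = [lhs]   # reduce
                        changed = True
    return symbols == ["S"]

sentence = "the dog sees the cat".split()
print(top_down("S", sentence) == [])  # True: S expands down to the words
print(bottom_up(sentence))            # True: the words reduce up to S

Both directions accept the same sentence; the difference is purely in
which end of the derivation the work starts from.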

I did not think this was a particularly enlightening analogy. Perhaps the
author would like to use two separate analogies to demonstrate the
differences?

>DENNETT:
>There is no controversy, so far as I know, about the need for this dual
>source of determination, but only about their relative importance, and
>when, where, and how the top-down influences are achieved. For instance,
>speech perception cannot be entirely data-driven because not only are the
>brains of those who know no Chinese not driven by Chinese speech in the
>same ways as the brains of those who are native Chinese speakers, but
>also, those who know Chinese but are ignorant of, or bored by, chess-
>talk, have brains that will not respond to Chinese chess-talk in the way
>the brains of Chinese-speaking chess-mavens are. This is true even at the
>level of perception: what you hear--and not just whether you notice
>ambiguities, and are susceptible to garden-path parsings, for
>instance--is in some measure a function of what sorts of expectations you
>are equipped to have.

Hunt:
Here, Dennett explains that speech perception cannot be entirely
data-driven, and to back-up his claim he points out that our brains are
equipped to deal with speech recognition, however this does not
automatically mean that we can understand all speech. If we can speak only
English, we cannot understand Chinese. He further demonstrates that if we
speak English but are not interested in football, and someone tries to
talk to us about football, then we will understand the vocabulary, but not
necessarily the content, i.e. I do not understand a great deal about
football, and if someone talks to me about it, I can understand the words
they use, but I do not understand for instance, the "off-side" rule that
might come up in conversation. I understand 'off' and 'side', but do not
understand the combination of the two. The combinations change the context
of the words.
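
As a rough sketch of how prior knowledge could change what is "heard",
here is a toy model (entirely my own construction; the vocabularies,
function name, and tokens are invented for illustration) in which the
same bottom-up input is interpreted differently depending on which
multi-word units the listener's top-down knowledge already contains:

def interpret(words, expectations):
    """Combine bottom-up evidence with top-down expectations.

    words        -- the raw input tokens (bottom-up data)
    expectations -- phrases the listener knows as units (top-down)
    """
    understood = []
    i = 0
    while i < len(words):
        # Top-down: try to match a known multi-word unit first.
        pair = " ".join(words[i:i + 2])
        if pair in expectations:
            understood.append(expectations[pair])
            i += 2
        else:
            # Bottom-up only: each word is taken at face value.
            understood.append(words[i])
            i += 1
    return understood

football_fan = {"off side": "OFFSIDE_RULE"}  # knows the combined concept
non_fan = {}                                 # knows only the bare words

tokens = ["he", "was", "off", "side"]
print(interpret(tokens, football_fan))  # ['he', 'was', 'OFFSIDE_RULE']
print(interpret(tokens, non_fan))       # ['he', 'was', 'off', 'side']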

>DENNETT:
>...Alexander's comprehension machinery was apparently set with too strong
>a top-down component (though in fact he apparently perceived the stimulus
>just fine).

Hunt:
Dennet went on to describe a situation were someone is trying to
communicate with a deaf man through an ear trumpet. The deaf man can
obviously hear what is said, but thinks he has heard wrong. My
understanding of Dennett's meaning behind the story is that because there
is too much rule applying going on in the deaf man's brain, the input is
over analysed. This is why a conclusion of a too strong a top-down
methodology going on in the man's brain is drawn. I may have misunderstood
this and I am open to suggestion of other interpretations.

>DENNETT:
>An AI speech-understanding system whose development was funded by DARPA
>(Defense Advanced Research Projects Agency), was being given its debut
>before the Pentagon brass at Carnegie Mellon University some years ago. To
>show off the capabilities of the system, it had been attached as the
>"front end" or "user interface" on a chess-playing program. The general
>was to play white, and it was explained to him that he should simply tell
>the computer what move he wanted to make. The general stepped up to the
>mike and cleared his throat--which the computer immediately interpreted as
>"Pawn to King-4." Again, too much top-down, not enough bottom-up.

Hunt:
Here Dennett is further demonstrating too much top-down analysis by the
computer this time. I am unsure why he has used this analogy and found
myself confused as to what he was trying to convey to the reader. Surely
the comparison of a human brain and a computer that has been designed to
carry out speech recognition are two different things entirely? I am
assuming that Dennett means that the computer is taking each individual
piece of input and trying to apply rules to them and not looking at the
whole input together in a bottom-up sense. By breaking the sentences up
completely, the meaning is taken away which is why he draws the conclusion
of too much top-down?
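
If I have read the anecdote correctly, the failure might be sketched like
this (a toy of my own, not the actual DARPA system; the command list and
similarity measure are invented stand-ins): a recogniser forced to map
every input onto its nearest expected chess move has no way to report
"that was just noise".

EXPECTED_MOVES = ["pawn to king 4", "pawn to queen 4",
                  "knight to king bishop 3"]

def similarity(heard, expected):
    # Toy acoustic match: fraction of shared characters (a crude
    # stand-in for a real acoustic model).
    shared = sum(1 for ch in set(heard) if ch in expected)
    return shared / max(len(set(expected)), 1)

def recognise(heard):
    # No rejection threshold: the top-down expectation always wins,
    # so even a throat-clearing comes out as a legal chess move.
    return max(EXPECTED_MOVES, key=lambda m: similarity(heard, m))

print(recognise("ahem hrrm"))  # some legal move, never "noise"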

>DENNETT:
>This methodology is a straightforward application of standard
>("forward") engineering to the goal of creating artificial
>intelligences. This is how one designs and builds a clock, a water pump,
>or a bicycle, and so it is also how one should design and build a
>robot. The client or customer, if you like, describes the sought for
>object, and the client is the boss, who sets in motion a top-down
>process. This top-down design process is not simply a one-way street,
>however, with hierarchical delegation of unrevisable orders to subordinate
>teams of designers. It is understood that as subordinates attempt to solve
>the design problems they have been given, they are likely to find good
>reasons for recommending revisions in their own tasks, by uncovering
>heretofore unrecognized opportunities for savings, novel methods of
>simplifying or uniting subtasks, and the like. One expects the process to
>gravitate towards better and better designs, with not even the highest
>level of specification immune to revision. (The client said he wanted a
>solar-powered elevator, but has been persuaded, eventually, that a
>wind-powered escalator better fits his needs.)

Hunt:
Dennett carries on to say that top-down methodologies are the same as
forward engineering in regards to artificial intelligence. He states that
such tasks as building a clock or a bike can be seen as forward
engineering. I made the assumption that building a clock or a bike was a
case of reverse engineering - we already have the technology and design
there, so to build a new one we use the design and fathom out how it works
so that we can remake it in a different way and more effective way. I
think that Dennett makes a contradiction in his assertions, as later on in
the paper he describes the re-engineering of a product that has already
been designed, as reverse-engineering. I think the idea of making a clock
or a bike as forward engineering is purely theoretical, but not
necessarily practical in reality - if I made a clock, I would see how
someone else had done it and this is what Dennett and I would describe as
reverse engineering.

>DENNETT:
>Reverse engineering is just what the term implies: the interpretation of
>an already existing artifact by an analysis of the design considerations
>that must have governed its creation.

Hunt:
It is here that Dennett makes the contradiction. If I wanted to make a
clock, I would make an interpretation of an already existing artifact by
an analysis of the design considerations that must have governed its
creation.

>DENNETT:
>There is a phenomenon analogous to convergent evolution in
>engineering: entirely independent design teams come up with virtually the
>same solution to a design problem. This is not surprising, and is even
>highly predictable, the more constraints there are, the better specified
>the task is. Ask five different design teams to design a wooden bridge to
>span a particular gorge and capable of bearing a particular maximum load,
>and it is to be expected that the independently conceived designs will be
>very similar: the efficient ways of exploiting the strengths and
>weaknesses of wood are well-known and limited.

Hunt:
Here, Dennett shows a classic example of reverse engineering that exploits
what we already know. Time and time again we have explored how something
has been made or put together and exploited these facts to make a new and
better version. The more this process is carried out, the more alike the
designs become.

>DENNETT:
>But when different engineering teams must design the same sort of thing a
>more usual tactic is to borrow from each other. When Raytheon wants to
>make an electronic widget to compete with General Electric's widget, they
>buy several of GE's widget, and proceed to analyze them: that's reverse
>engineering. They run them, benchmark them, x-ray them, take them apart,
>and subject every part of them to interpretive analysis: why did GE make
>these wires so heavy? What are these extra ROM registers for? Is this a
>double layer of insulation, and if so, why did they bother with it? Notice
>that the reigning assumption is that all these "why" questions have
>answers. Everything has a raison d'etre; GE did nothing in vain.

Hunt:
Again, another classic example of reverse engineering, which furthers my
claim that Dennett has contradicted himself.

>DENNETT:
>What Marr, Newell, and I (along with just about everyone in AI) have long
>assumed is that this method of reverse engineering was the right way to do
>cognitive science. Whether you consider AI to be forward engineering (just
>build me a robot, however you want) or reverse engineering (prove, through
>building, that you have figured out how the human mechanism works), the
>same principles apply.

Hunt:
The author explains that artificial intelligence can be seen as both
forward and reverse engineering, but as I understand from what he has
written, he believes that the correct way to do cognitive science is
through reverse engineering. I agree with his view. I think that you have
to understand what or who you are modeling to engineer it.

>DENNETT:
>A cautious version of this assumption would be to note that the judicious
>application of reverse engineering to artifacts already invokes the
>appreciation of historical accident, sub-optimal jury-rigs, and the like,
>so there is no reason why the same techniques, applied to organisms and
>their subsystems, shouldn't yield a sound understanding of their
>design. And literally thousands of examples of successful application of
>the techniques of reverse engineering to biology could be cited. Some
>would go so far (I am one of them) as to state that what biology is, is
>the reverse engineering of natural systems. That is what makes it the
>special science that it is and distinguishes it from the other physical
>sciences,

Hunt:
An interesting theory from Dennett, he suggests that Biology is the
reverse engineering of natural systems. I think that the statement is very
strong. What about the other aspects of the subject - the study of plant
life, or the human body? I do not remember studying the reverse
engineering of natural systems in biology class at school, and it is for
this reason that I do not agree with his assertion, although I think that
it has its place within the subject. I do not follow why Dennett is trying
to make the distinction between biology and the other sciences. Chemistry
is the study of chemicals and the reactions between them (although this is
a rather simplified view), and to a certain extent reverse engineering
could be carried out in the sphere of the subject. So what separates the
two?

>DENNETT:
>But if this is so, we must still take note of several further problems
>that make the reverse engineering of natural systems substantially more
>difficult than the reverse engineering of artifacts, unless we supplement
>it with a significantly different methodology, which might be called
>bottom-up reverse engineering--or, as its proponents prefer to call
>it: Artificial Life. The Artificial Life movement (AL), inaugurated a few
>years ago with a conference at Los Alamos (Langton, 1989), exhibits the
>same early enthusiasm (and silly overenthusiasm) that accompanied the
>birth of AI in the early 60's.

Hunt:
Dennett suggests that there is a different methodology needed for the
reverse engineering of natural systems than that used in the reverse
engineering of anything else. He carries on to explain about the birth of
an organisation - AL - and states that they show "silly
over-enthusiasm" for the subject, but he does not explain why. I would be
interested to hear him qualify his statement.

>DENNETT:
>...In my opinion, it promises to deliver even more insight than AI. The
>definitive difference between AI and AL is, I think, the role of bottom-up
>thinking in the latter. Let me explain.
>
>A typical AL project explores the large scale and long range effects of
>the interaction between many small scale elements (perhaps all alike,
>perhaps populations of different types). One starts with a specification
>of the little bits, and tries to move towards a description of the
>behavior of the larger ensembles. Familiar instances that predate the
>official Artificial Life title are John Horton Conway's game of Life and
>other cellular automata, and, of course, connectionist models of networks,
>neural and otherwise. It is important to realize that connectionist models
>are just one family within the larger order of AL models.

Hunt:
Dennett states that there is difference between AI an AL and the
difference being the long-term relationships between small elements that
make up the entity. I cannot disagree with this at all, and although it is
rather uninteresting to agree, I would say that Dennett echoes my
arguments about modeling the human mind. I think that it is one thing to
talk about the modeling of a person or the human mind, but another to
actually do it. It is this argument that I am particularly interested
in. I believe that the mind is made up of lots of tiny pieces of
information that are unique to an individual, and it is the relationships
between these pieces over a period of time that makes the person or the
mind. I particularly like the analogy that Dennett refers to later in the
paper - Plato's aviary of knowledge that likens birds in a aviary to human
memory; how do you get the right bird to come when you call? How would a
computer simulation know what is the correct action?
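
Conway's game of Life, which Dennett names as a familiar instance, makes
the bottom-up point nicely, so here is a minimal sketch of it (the glider
example and implementation details are my own): simple local rules among
many small elements, with the interesting behaviour emerging only at the
level of the whole ensemble.

from collections import Counter

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) pairs."""
    # Count how many live neighbours each cell position has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        # A cell lives next step if it has 3 neighbours, or 2
        # neighbours and was already alive.
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "glider": five cells whose pattern walks across the grid, a
# large-scale behaviour nowhere mentioned in the local rules.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted by (1, 1)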

>DENNETT:
>When human engineers design something (forward engineering), they must
>guard against a notorious problem: unforeseen side effects. When two or
>more systems, well-designed in isolation, are put into a supersystem, this
>often produces interactions that were not only not part of the intended
>design, but positively harmful; the activity of one system inadvertently
>clobbers the activity of the other. By their very nature unforeseeable by
>those whose gaze is perforce myopically restricted to the subsystem being
>designed, the only practical way to guard against unforeseen side effects
>is to design the subsystems to have relatively impenetrable boundaries
>that coincide with the epistemic boundaries of their creators. In short,
>you attempt to insulate the subsystems from each other, and insist on an
>overall design in which each subsystem has a single, well-defined function
>within the whole. The set of systems having this fundamental abstract
>architecture is vast and interesting, of course, but--and here is AL's
>most persuasive theme--it does not include very many of the systems
>designed by natural selection! The process of evolution is notoriously
>lacking in all foresight; having no foresight, unforeseen or unforeseeable
>side effects are nothing to it; it proceeds, unlike human engineers, via
>the profligate process of creating vast numbers of relatively uninsulated
>designs, most of which, of course, are hopelessly flawed because of
>self-defeating side effects, but a few of which, by dumb luck, are spared
>that ignominious fate. Moreover, this apparently inefficient design
>philosophy carries a tremendous bonus that is relatively unavailable to
>the more efficient, top-down process of human engineers: thanks to its
>having no bias against unexamined side effects, it can take advantage of
>the very rare cases where beneficial serendipitous side effects
>emerge. Sometimes, that is, designs emerge in which systems interact to
>produce more than was aimed at. In particular (but not exclusively) one
>gets elements in such systems that have multiple functions.

Hunt:
Dennett seems to like long sentences - perhaps he could break the text up
to make it easier to read? Here the author describes how the engineering
of something has to ensure that there are no side effects to them. In the
case of two or more sub-systems that are requires to work together, they
may have worked with no problems independantly, but put them together and
the interactions bwtween them might not produce desired behaviour. He
suggests that the sub-systems need to be insulated from each other. He
further suggests that with natural systems that are produced through
evolution have no foresight and as it has no foresight it just carries on
with no regard to the problems that may arise. He makes the comparrison
between evolution and human engineers who stop when they spot flaws in
their designs. He points out that evolution makes use of the rare
occasions when there are side effects that are desirable that human
engineers do not some close to as they have stopped designing way before
they get to this point.
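
As a rough illustration of that design philosophy, here is a toy sketch
(entirely my own invention; the "design", fitness function, and
thresholds are made-up stand-ins) of blind variation over uninsulated
parts: most variants are discarded as self-defeating, but selection can
keep the rare one whose unplanned side effect happens to help.

import random

random.seed(0)

def fitness(design):
    """Score a design; interactions between its parts are NOT insulated."""
    a, b = design
    primary = -abs(a - 10)           # the function we are selecting for
    side_effect = a * b              # an unplanned interaction of parts
    if side_effect < -50:
        return float("-inf")         # self-defeating: hopelessly flawed
    return primary + 0.1 * side_effect  # rarely, the side effect helps

def mutate(design):
    """Blind variation: no foresight about what the change will do."""
    a, b = design
    return (a + random.uniform(-1, 1), b + random.uniform(-1, 1))

population = [(random.uniform(0, 20), random.uniform(-10, 10))
              for _ in range(200)]
for _ in range(100):
    offspring = [mutate(random.choice(population)) for _ in range(200)]
    # Keep the lucky few; most variants are flawed and are discarded.
    population = sorted(population + offspring, key=fitness)[-200:]

best = max(population, key=fitness)
print(best, fitness(best))  # typically a near 10, with b drifted upwards
                            # so that the a*b side effect is exploited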

Hunt:
I actually agreed with the majority of Dennett's assertions, although it
would be more interesting to disagree and argue with the points that he
raised. However, I believe that arguing for the sake of arguing is a
pointless exercise, thus I am making no further comments at this time.


