Re: Dennett: Cognitive Science as Reverse Engineering

From: Cattell Christopher (
Date: Tue Apr 24 2001 - 20:25:10 BST

This paper was written by Daniel C. Dennett and attempts to distinguish
between two methods of engineering - "top-down" and "bottom-up". Dennett
starts by distinguishing between design methodologies and models. He then
explains the difference between the top-down and bottom-up methodologies and
how systems can be designed using them. Next, Dennett concentrates
on reverse engineering - what it is and what assumptions it requires.
Dennett then talks about Artificial Life and how it differs from
Artificial Intelligence. He finishes with the problem of "distal
access", which is how the central system reaches out to the right part of
memory at the right time.

> When a person perceives (and comprehends) speech, processes occur in the
> brain which must be partly determined bottom-up, by the input and partly
> determined top-down, by effects from on high, such as interpretive
> dispositions in the perceiver due to the perceiver's particular
> knowledge and interests.

Dennett starts by differentiating between design methodologies and
models, using language comprehension as an example. He explains that
comprehension must have both bottom-up and top-down elements to it.
The bottom-up part is the actual input you receive - the process of
hearing the words; top-down is the process of interpreting them as meaningful.
Dennett then explains how this process can be too top-down (i.e.
too much of the work is done by prior expectation rather than by the input).
He also says, and I agree, that there is no controversy about the need for a
dual source of determination, only about the relative importance of each
source. Dennett uses the following example of how too much top-down and not
enough bottom-up information can hinder, rather than help, perception.

> The philosopher Samuel Alexander, was hard of hearing in his old age, and
> used an ear trumpet. One day a colleague came up to him in the common
> room at Manchester University, and attempted to introduce a visiting
> American philosopher to him. "THIS IS PROFESSOR JONES, FROM AMERICA!" he
> bellowed into the ear trumpet. "Yes, Yes, Jones, from America" echoed
> Alexander, smiling. "HE'S A PROFESSOR OF BUSINESS ETHICS!" continued
> the colleague. "What?" replied Alexander. "BUSINESS ETHICS!" "What?
> Professor of what?" "PROFESSOR OF BUSINESS ETHICS!" Alexander shook
> his head and gave up: "Sorry. I can't get it. Sounds just like 'business
> ethics'!"

Dennett also explains that more top-down influence is not always bad; in
fact it can help to compensate for a degraded input. In some systems (e.g. a
speech recognition system) where there isn't enough top-down information,
the system may be too specific and unable to cope with the variation between
different people's voices. This shows that the amount of top-down influence
has to be just right.
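One way to picture this balance (my own illustrative sketch, not from
Dennett's paper - the candidate phrases, priors, and likelihoods are all
invented) is as a Bayesian combination of top-down expectation and bottom-up
evidence: a balanced prior lets the input decide, while an overconfident
prior can override it, as in the Samuel Alexander anecdote.

```python
def recognise(likelihood, prior):
    """Pick the phrase with the highest posterior = likelihood * prior."""
    posterior = {w: likelihood[w] * prior[w] for w in likelihood}
    return max(posterior, key=posterior.get)

# Bottom-up acoustic evidence slightly favours "business ethics".
likelihood = {"business ethics": 0.6, "busy aesthetics": 0.4}

# A balanced top-down prior lets the evidence decide.
balanced = {"business ethics": 0.5, "busy aesthetics": 0.5}

# An overly strong top-down expectation drowns out the input.
overconfident = {"business ethics": 0.1, "busy aesthetics": 0.9}

print(recognise(likelihood, balanced))       # prints "business ethics"
print(recognise(likelihood, overconfident))  # prints "busy aesthetics"
```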

> In these contexts, "top-down" refers to a contribution from "on
> high"--from the central, topmost information stores--to what is coming
> "up" from the transducers or sense organs

Here Dennett clarifies that top-down processing applies information that has
already been stored "on high" to the information that is coming "up" from
"down below". Dennett then introduces a view on purely bottom-up processes.

> The issue is complicated by the fact that the way in which Marr's model
> (and subsequent Marr- inspired models) squeeze so much out of the data
> is in part a matter of fixed or "innate" biases that amount to
> presuppositions of the machinery...Is the rigidity assumption tacitly
> embodied in the hardware a top-down contribution? If it were an optional
> hypothesis tendered for the nonce by the individual perceiver, it would
> be a paradigmatic top-down influence. But since it is a fixed design
> feature of the machinery, no actual transmission of "descending" effects
> occurs; the flow of information is all in one inward or upward
> direction.

Here Dennett asks whether an assumption "built in" to the machinery counts
as "top-down". He explains, and I agree, that it does not. The reason is
that no information actually flows down from above; the flow is all in one
inward or upward direction.

> It is hopeless, Marr argued, to try to build cognitive science models
> from the bottom-up: by first modeling the action of neurons (or synapses
> or the molecular chemistry of neurotransmitter production), and then
> modeling the action of cell assemblies, and then tracts, and then whole
> systems (the visual cortex, the hippocampal system, the reticular system).
> You won't be able to see the woods for the trees

Dennett explains Marr's three-level scheme. First the computational level is
specified; once this has been successfully completed you move down to the
next level and specify an algorithm for computing that function. Similarly,
once the algorithm has been specified you move down to the final level,
the physical implementation of the algorithm. I believe this is a little too
idealised, as I don't think everything can be split into three clean levels
as Marr suggests. Dennett then explains why the majority of AI research
addresses issues formulated in the top-down manner: because AI is primarily
an engineering discipline, and engineers work top-down, research into AI is
also carried out in this manner using engineering principles. Dennett also
explains that the standard "forward" engineering methodology is not just a
"one-way street", and that throughout the design process things are likely
to change:

> (The client said he wanted a solar-powered elevator, but has been
> persuaded, eventually, that a wind-powered escalator better fits his
> needs.)
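Marr's separation of levels can be illustrated with a toy example (my own,
not from the paper): the computational level fixes *what* is computed, and
quite different algorithms at the level below can realise the very same
function.

```python
# Computational-level specification: multiply two non-negative integers.
# Two different algorithmic-level realisations of that one specification:

def multiply_direct(a, b):
    # One algorithm: use the machine's built-in multiplication.
    return a * b

def multiply_by_addition(a, b):
    # A different algorithm for the same function: repeated addition.
    total = 0
    for _ in range(b):
        total += a
    return total

# Same computational-level function, different algorithms underneath.
print(multiply_direct(6, 7), multiply_by_addition(6, 7))  # prints "42 42"
```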

Dennett mentions the distinction that Allen Newell makes between what he
calls the knowledge level and the physical symbol level. This distinction
parallels Marr's, and Newell likewise insists that the designer take into
consideration the temporal and spatial constraints on architectures when
working at the algorithmic level. Dennett states three things that he,
Marr, and Newell all have in common:

> 1. stress on being able (in principle) to specify the function computed
> (the knowledge level or intentional level) independently of the other
> levels.
> 2. an optimistic assumption of a specific sort of functionalism: one that
> presupposes that the concept of the function of a particular cognitive
> system or subsystem can be specified. (It is the function which is to be
> optimally implemented.)
> 3. A willingness to view psychology or cognitive science as reverse
> engineering in a rather straightforward way.

Dennett then moves on to reverse engineering - the interpretation of an
already existing artifact by an analysis of the design considerations that
must have governed its creation. Reverse engineering is done by taking
something apart, analysing it, and improving/rebuilding it using what you
have learned from the analysis. It is often used when a company wishes to
"copy" a product but doesn't want to blatantly infringe copyright.

> Of course if the wisdom of the reverse engineers includes a healthy
> helping of self-knowledge, they will recognize that this default
> assumption of optimality is too strong: sometimes engineers put stupid,
> pointless things in their designs, sometimes they forget to remove
> things that no longer have a function, sometimes they overlook
> retrospectively obvious shortcuts.

This goes back to the earlier idea of there being too much information at
the top-down level. I believe that the best way to reverse engineer is to
have as little prior knowledge of a system as possible, but enough
knowledge to be able to understand it and reverse engineer it. This way
the best "copy" will be achieved, as the engineers will have few
preconceptions about the system that could interfere with their
understanding of it. However, I also believe that it is very difficult, if
not impossible, to reverse engineer a system without ANY knowledge of it.
Dennett also states that he and almost everybody in AI have assumed that
reverse engineering is the correct way to do cognitive science. I agree
with this view, as I think the only way to learn how to reproduce the
mind (to the extent possible) is to find out how it works at the lowest
level possible. Dennett then points out an oversight in Marr's top-down
vision: it over-idealises the design problem.

> in spite of the difference in the design processes, reverse engineering
> is just as applicable a methodology to systems designed by Nature, as to
> systems designed by engineers. Their presupposition, in other words,
> has been that even though the forward processes have been different,
> the products are of the same sort, so that the reverse process of
> functional analysis should work as well on both sorts of product

Here Dennett states that reverse engineering is just as applicable to
systems designed by nature as to systems designed by engineers. His
reasoning is that it doesn't matter how you arrived at the system: whether
created by man or by nature, the products are of the same sort. Next,
Dennett explains what Artificial Life (AL) is and how it differs from
Artificial Intelligence.

> Dennett:
> A typical AL project explores the large scale and long range effects of
> the interaction between many small scale elements (perhaps all alike,
> perhaps populations of different types). One starts with a
> specification of the little bits, and tries to move towards a
> description of the behavior of the larger ensembles.

Dennett clearly explains what is involved in a typical AL project and
later states, citing a book by the neuroscientist Valentino Braitenberg,
that it is much easier to deduce the behaviour of a system whose internal
machinery you have built than to deduce the internal machinery of a system
whose behaviour you have only observed. I agree with this up to a point,
but the two tasks are so different that the comparison, I think, is not
that helpful. The reason is that when you are building the internal
machinery of a system you are unlikely to know exactly what the outcome
will be, but working the other way you already know what the result (or the
result you want) is going to be. Dennett then goes on to talk about how
designers must guard against unforeseen side effects when designing
something (forward engineering): two or more independently designed
subsystems that, when integrated into a supersystem, interfere with each
other in unintended ways.
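The AL recipe Dennett describes - specify the little bits, then watch the
behaviour of the larger ensemble - can be sketched with a toy simulation
(my own illustration, not from the paper; the particular update rule is an
arbitrary choice). Each cell follows only a local rule, yet a large-scale
pattern emerges across the whole row.

```python
RULE = 90  # arbitrary elementary cellular-automaton rule, for illustration

def step(cells):
    """Update every cell from its three-cell local neighbourhood only."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start with a single "on" cell and watch structure emerge from
# purely local, bottom-up interactions between the small-scale elements.
row = [0] * 15
row[7] = 1
for _ in range(6):
    print("".join(".#"[c] for c in row))
    row = step(row)
```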

> In short, you attempt to insulate the subsystems from each other, and
> insist on an overall design in which each subsystem has a single,
> well-defined function within the whole. The set of systems having this
> fundamental abstract architecture is vast and interesting, of course,
> but--and here is AL's most persuasive theme--it does not include very
> many of the systems designed by natural selection!

Here Dennett explains that the process of evolution does not isolate
subsystems in the way a top-down engineering approach does. This is not
necessarily a problem, but it means you can end up with elements of the
system that serve multiple functions.

> In biology, one encounters quite crisp anatomical isolation of functions
> (the kidney is entirely distinct from the heart, nerves and blood
> vessels are separate conduits strung through the body), and without
> this readily discernible isolation, reverse engineering in biology would
> no doubt be humanly impossible, but one also sees superimposition of
> functions that apparently goes "all the way down".

Dennett states that without the isolation of different anatomical parts,
the reverse engineering of biology would be humanly impossible. The same
applies to cognitive systems, which means top-down reverse engineering is
unlikely to encounter the right design issues. Dennett explains how AL
overcomes this by opening up different regions of design space. Lastly,
Dennett explains the problem of "distal access":

> A standard feature of models of cognitive systems or thinkers or
> planners is the separation between a central "workspace" or "working
> memory" and a long term memory. Materials are brought to the workspace
> to be considered, transformed, compared, incorporated into larger
> elements, etc. This creates what Newell has called the problem of
> "distal access". How does the central system reach out into the memory
> and find the right elements at the right time?

> But nothing we know in functional neuroanatomy suggests anything like
> this division into separate workspace and memory. On the contrary, the
> sort of crude evidence we now have about activity in the cerebral cortex
> suggests that the very same tissues that are responsible for long term
> memory, thanks to relatively permanent adjustments of the connections,
> are also responsible, thanks to relatively fleeting relationships that
> are set up, for the transient representations that must be involved in
> perception and "thought"

Dennett identifies the two parts of memory in these models - essentially
short term (the workspace) and long term - and the question of how the
central system retrieves the right material from long-term memory into the
workspace. However, the neuroanatomical evidence indicates that the same
tissues are responsible for both. This, Dennett explains, is similar to the
multi-functioning elements mentioned before. Dennett also raises another
possibility worth investigating: that it is a mistake to decompose the
function at all, since machines carved at entirely different joints can
produce the same effects. The only way to explore this, however, would be
bottom-up reverse engineering; it is not even worth attempting top-down.
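The "distal access" problem itself - reaching the right element in memory
by content rather than by a fixed address - can be sketched in miniature
(my own illustration; the stored facts and the string-similarity retrieval
scheme are invented, not Newell's or Dennett's proposal).

```python
import difflib

# A toy long-term memory of invented facts.
long_term_memory = {
    "capital of France": "Paris",
    "boiling point of water": "100 C",
    "author of Hamlet": "Shakespeare",
}

def distal_access(cue, memory):
    """Retrieve the stored item whose key best matches a partial cue."""
    best = max(memory,
               key=lambda k: difflib.SequenceMatcher(None, cue, k).ratio())
    return memory[best]

# The "workspace" retrieves the right element from a partial, content-based cue.
print(distal_access("capital France", long_term_memory))  # prints "Paris"
```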

To conclude, I think that this paper is quite good. It gives a solid
overview of the differences between top-down and bottom-up reverse
engineering. My only criticism is that it doesn't offer many new theories
and is merely an overview. It is easy to read and understand, and after
reading it I have a much better idea of the circumstances in which you
should use top-down and bottom-up reverse engineering.

This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:30 BST