Re: Dennett: Cognitive Science as Reverse Engineering

From: Clark Graham (ggc198@ecs.soton.ac.uk)
Date: Sat Apr 07 2001 - 13:14:32 BST


Clark:
Dennett's paper, 'Cognitive Science as Reverse Engineering: Several
Meanings of "Top-Down" and "Bottom-Up"', sets out to do just what it
says: to try to disambiguate the two terms and to look at their role
in cognitive science.

Before looking at the paper, several terms must be defined.

Reverse Engineering:
        Something needs to be built, and we already have a working
        version. The trouble is, we don't know how it works, so we can't
        build any more of it. We must therefore take the existing version
        apart piece by piece to find out how it works, and then go about
        building another.

Forward Engineering:
        Something needs to be built, and we have access to a whole range
        of possible components, about which we know everything (or at
        least a lot). From these, we can work out the best way to put the
        components together in order to make the required something.

Top-Down:
        The big picture is looked at first, and then successively smaller
        "layers" are considered. This is analogous to deduction. For
        example, when designing a computer program to simulate cars racing
        around a track, a top-down approach would consider the track
        first, and how cars should behave at each stage of it. The cars
        themselves would then be designed, adhering to the previously
        defined rules. Smaller features would then be considered, for
        example braking, the locking up of wheels, or damage to the car.

Bottom-Up:
        The opposite of top-down, and analogous to induction. Small
        components are started with, and these are pieced together in
        order to make a larger "thing". With the race car example, a
        bottom-up approach would first consider Newton's laws of physics,
        and design cars (or components of cars) that obeyed them. When it
        came to driving them around a track, they would already "know" how
        to behave at each position on the road.
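
A rough sketch of the contrast, in Python (all of the names and numbers
are invented purely for illustration): in the top-down version the
car's behaviour is read off rules attached to the track, while in the
bottom-up version it emerges from the physics built into the car.

    # Top-down: the track and its rules come first; a car simply obeys
    # whatever rule is attached to the segment it is currently on.
    TRACK_RULES = {"straight": 200.0, "chicane": 80.0, "hairpin": 40.0}

    def top_down_speed(segment):
        return TRACK_RULES[segment]           # behaviour dictated from above

    # Bottom-up: the car is built from components that obey general
    # physics; its cornering speed emerges from F = m*v^2/r,
    # i.e. v = sqrt(F*r/m), rather than being written into the track.
    def bottom_up_speed(mass_kg, grip_force_n, corner_radius_m):
        return (grip_force_n * corner_radius_m / mass_kg) ** 0.5

    print(top_down_speed("hairpin"))              # 40.0, looked up
    print(bottom_up_speed(700.0, 12000.0, 25.0))  # ~20.7 m/s, emergent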

> DENNETT:
> ...speech perception cannot be entirely data-driven because not only
> are the brains of those who know no Chinese not driven by Chinese
> speech in the same ways as the brains of those who are native
> Chinese speakers, but also, those who know Chinese but are ignorant
> of, or bored by, chess-talk, have brains that will not respond to
> Chinese chess-talk in the way the brains of Chinese-speaking chess-
> mavens are.

Clark:
Dennett's point is that speech perception cannot be entirely
data-driven (bottom-up): obviously the same speech (eg. Chinese or
English) means different things to different people, because their
brains are "wired up" differently, so that some can recognise and
understand Chinese and some cannot. What a listener already knows
shapes what they hear, which is a top-down influence. Looking at the
neurons first and then gradually working up to decide whether or not a
brain could understand Chinese would be the bottom-up approach to this
situation. Dennett then gives some more examples of top-down,
expectation-driven processing that fails, of which this is the most
interesting:

> DENNETT:
> An AI speech-understanding system whose development was funded by
> DARPA (Defense Advanced Research Projects Agency), was being given
> its debut before the Pentagon brass at Carnegie Mellon University
> some years ago. To show off the capabilities of the system, it had
> been attached as the "front end" or "user interface" on a chess-
> playing program. The general was to play white, and it was explained
> to him that he should simply tell the computer what move he wanted
> to make. The general stepped up to the mike and cleared his throat--
> which the computer immediately interpreted as "Pawn to King-4."

Clark:
In this case, the computer took the speech input and tried to match it
against stored examples to see which one fitted best. If a bottom-up
approach had been used, the input would first have been processed into
some form the program could understand (ie. by splitting the speech up
into known words); the program would then have tried to figure out
what was being said, attach some meaning to it, and act accordingly.
The "known words" would be stored as descriptions of the words, rather
than as a bank of waveforms against which to match the input.
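
As a hedged illustration (Python; the function names are my own, the
waveform "matching" is reduced to a dot product between equal-length
arrays, and the bottom-up side glosses over the acoustics entirely),
the two strategies might look something like this:

    import numpy as np

    def template_match(input_wave, stored_waves):
        # "Top-down" matching: pick whichever stored waveform scores best
        # against the input, even if the input is only a throat-clearing.
        scores = {label: float(np.dot(input_wave, wave))
                  for label, wave in stored_waves.items()}
        return max(scores, key=scores.get)

    def word_level_parse(utterance, lexicon):
        # "Bottom-up" processing: split the input into known words first,
        # then assign a meaning; unknown words are reported rather than
        # silently matched to the nearest template.
        words = utterance.lower().split()
        unknown = [w for w in words if w not in lexicon]
        if unknown:
            return ("unrecognised", unknown)
        return ("recognised", [lexicon[w] for w in words])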

This illustrates that bottom-up approaches generally take a lot more
work. It is much easier to match waveforms against each other for
similarities than it is to teach a program to understand human speech.
However, the bottom-up approach is much more versatile. If a program
could understand speech, then all that would be required to port it to
a different problem domain would be the programming of some different
actions (ie. move a robot instead of a chess piece on a screen). In
contrast, the top-down program would have to be written more or less
from scratch if a similar program were needed in, say, an operating
theatre. A whole new bank of waveforms (stored speech patterns) would
have to be constructed, and the sensitivity of the program would also
probably have to be altered (the language of medicine is much more
complicated and open to misinterpretation than that of the military).

A problem for cognitive science, though (as I see it), is that humans
use both approaches to speech recognition. In the main, we use a
bottom-up approach: we understand our language, we take in the words
people are saying, and bit by bit we assign meaning to sentences.
However, if an unfamiliar word is said, or a sentence is spoken with
an unfamiliar accent, or we find it hard to hear someone because of
background noise or distance, or even if we hear a phrase we are
extremely familiar with, it seems that the brain takes a top-down
approach. We are much more prone to misinterpreting speech in these
situations because, when the input is noisy or insufficiently clear,
we attempt to match it to something we already know. Dennett's example
shows that this is not always very successful.

The question that needs to be asked is whether this top-down method is
taken after the bottom-up method has failed ("I could not understand
that, so I will generalise it to something it might be"), or whether
the brain knows which method to apply before interpretation begins.
Clearly, the second option would be much harder to implement in an AI
system than the first.
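
A minimal sketch of the first option (Python again; the argument names
and the crude scoring are invented for illustration, and real speech
would of course be far messier):

    def interpret(utterance, lexicon, stored_phrases):
        # Bottom-up first: if every word is known, build the meaning up.
        words = utterance.lower().split()
        if all(w in lexicon for w in words):
            return ("understood", [lexicon[w] for w in words])
        # Bottom-up failed: fall back on generalising to the most similar
        # familiar phrase, scored here by how many words are shared.
        def shared_words(phrase):
            return len(set(phrase.lower().split()) & set(words))
        return ("guessed", max(stored_phrases, key=shared_words))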

> DENNETT:
> It is hopeless, [David] Marr [(a psychologist)] argued to try to
> build cognitive science models from the bottom-up: by first
> modeling the action of neurons (or synapses or the molecular
> chemistry of neurotransmitter production), and then modeling the
> action of cell assemblies, and then tracts, and then whole systems
> (the visual cortex, the hippocampal system, the reticular system).
> You won't be able to see the woods for the trees.

Clark:
I disagree with this view: this is the way that evolution works, and
so it is a logical way to start building an artificial intelligence.
Obviously this way will take much longer than a top-down method, but
the result will hopefully be (like the speech-interpreting program)
something which can generalise easily, ie. it will be able to perform
tasks outside the range initially programmed into it. With a top-down
approach, you have to start with a definite idea of what you want to
build, eg. a human male, and construct the components thought
necessary to build it. With a bottom-up approach, however, it seems
much easier to start by building something relatively simple, like a
worm or even a human baby. When these small tasks are accomplished,
components can be added on or refined without having to redesign
everything from the top again.

Dennett then describes the three layers of Marr's top-down
methodology: the computational, algorithmic and physical levels.

> DENNETT:
> First,...you had to have a clear vision of what the task or function
> was that the neural machinery was designed to execute. This
> specification was at ...the computational level: it specified "the
> function" the machinery was supposed to compute and an assay of the
> inputs available for that computation.

Clark:
This seems to be entirely the wrong place to start. At a low level,
this might be OK, as you need to know what the artificial neurons or
biochemical processes are supposed to do. However, this would appear
to be a bottom-up approach, and so not what Marr or Dennett had in
mind. This theory may also work fine when considering small components
(such as an eye or the vision system), or when building an AI program
that just performs one or a few relatively simple tasks, ie. not one
designed to be indistinguishable from a human. In these cases, you
know exactly what you want your "machine" to be capable of. However,
when building an artificial human-like brain, there is no way you can
write down, formalise, or even know what it is supposed to be capable
of, except in very broad terms ("thinking", "imagination", etc). I
think that starting to consider what "functions" a human can perform
is either going to result in a (possibly infinitely) long list of
extremely specific functions, or a very short list of functions too
general to implement effectively.

> DENNETT:
> [After the computational level], one could then make progress on the
> next level down, the algorithmic level, by specifying an algorithm
> (one of the many logically possible algorithms) that actually
> computed that function. Here the specification is constrained,
> somewhat, by the molar physical features of the machinery: maximum
> speed of computation, for instance, would restrict the class of
> algorithms, and so would macro-architectural features dictating when
> and under what conditions various subcomponents could interact.

Clark:
In an artificial brain program, I don't think there should be, say,
functions for moving the left arm, the right arm, the left leg, etc.
Obviously these would all call the same code at some stage, but what
is needed is something even more general. For example, a messaging
system which simply took messages (the equivalent of nerve impulses)
from the "brain" to wherever they needed to go (such as a "muscle" in
the left arm) would seem to be enough in this case. Code controlling
the motors in the arm would then interpret these messages, and carry
out an appropriate action. There would be no need for an enormous bank
of algorithms, each designed to compute a specific function. Messages
would be transmitted, and each "receiver" would know how to react to
different messages. This bottom-up approach may also make it easier
for the receiving part of the system to react correctly to new
messages, perhaps in the way that neural networks trained with
unlabelled data can.
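
A minimal sketch of that kind of messaging system (Python; every class,
address and message field here is my own invention):

    class MessageBus:
        # The "brain" only posts messages; the bus knows nothing about
        # arms or legs, it just delivers to whoever has registered.
        def __init__(self):
            self.receivers = {}

        def register(self, address, receiver):
            self.receivers[address] = receiver

        def send(self, address, message):
            self.receivers[address].receive(message)

    class ArmMotorController:
        # The receiving end interprets the "nerve impulse" and drives
        # the motors; unfamiliar messages are simply ignored.
        def receive(self, message):
            if message.get("kind") == "contract":
                print("contracting muscle by", message["amount"])
            else:
                print("ignoring unfamiliar message")

    bus = MessageBus()
    bus.register("left_arm", ArmMotorController())
    bus.send("left_arm", {"kind": "contract", "amount": 0.3})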

Clearly, when adopting either a top-down or bottom-up method of
design, physical limitations have to be taken into consideration. It
is probably easier to consider these in a top-down approach, as
bottom-up is more reliant on emergent properties (those that "appear"
when sufficient components have been interconnected, eg. if you design
an arm from the bottom up by making the bones, joints, muscles, etc.
correctly, movement will be an emergent property [it was not
specifically implemented, but is still present]). Using a top-down
technique, each part of the system can be considered in detail before
it is built, therefore allowing any physical limitations to be
analysed in detail and overcome in some way. With the bottom-up
paradigm, however, there is sometimes no way of knowing exactly what
the various small sub-components will produce when they interact. In
this case, perhaps some sort of intelligent guesswork or trial-and-error
approach would eventually yield a system that overcame the physical
limitations encountered during manufacture.
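
Such a trial-and-error search might look something like the following
sketch (Python; a design is reduced to a list of numbers, and
'evaluate' stands in for whatever physical constraints apply - both are
my own simplifications):

    import random

    def trial_and_error(evaluate, initial_design, steps=1000):
        # Keep the current design, perturb it at random, and keep the
        # perturbation only if it scores better against the constraints.
        best, best_score = initial_design, evaluate(initial_design)
        for _ in range(steps):
            candidate = [x + random.gauss(0, 0.1) for x in best]
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best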

> DENNETT:
> Finally, with the algorithmic level more or less under control, one
> could address the question of actual implementation at the physical
> level.

Clark:
This is the final stage of the top-down approach, which leaves Marr's
methodology looking very similar to many software engineering
methodologies. The problem with such methodologies is that many of
their stages are either counter-intuitive or simply unnecessary, and
it seems likely that the same over-reliance on a specific methodology
would lead its followers away from the goal instead of towards it, by
concentrating on "individual" pieces without looking at the whole.

> DENNETT:
> If AI is considered as primarily an engineering discipline, whose
> goal is to create intelligent robots or thinking machines, then it
> is quite obvious that standard engineering principles should guide
> the research activity: first you try to describe, as generally as
> possible, the capacities or competences you want to design, and
> then you try to specify, at an abstract level, how you would
> implement these capacities, and then, with these design parameters
> tentatively or defeasibly fixed, you proceed to the nitty-gritty of
> physical realization.
>
> ...The sorts of questions addressed concern, for instance, the
> computation of three-dimensional structure from two-dimensional
> frames of input, the extraction of syntactic and semantic structure
> from symbol strings or acoustic signals, the use of meta-planning in
> the optimization of plans under various constraints, and so forth.
> The task to be accomplished is assumed (or carefully developed, and
> contrasted with alternative tasks or objectives) at the outset, and
> then constraints and problems in the execution of the task are
> identified and dealt with.

Clark:
Why is AI primarily an engineering discipline? Perhaps designing
programs such as expert systems, or simple (relative to a human)
neural networks, can be considered solely engineering tasks, but
cognitive science certainly cannot: much can be and has been learnt
from the disciplines of biology, chemistry, psychology and philosophy.
Even if AI / cognitive science were considered "primarily an
engineering discipline", it would not be "quite obvious" that the
standard engineering top-down approach should be followed. For a
start, we are reverse engineering something we already have an example
of (us) instead of doing the usual forward engineering. We must also
ask why there must be just one way of doing something: even if AI is
engineering, and engineers use a top-down approach, it does not follow
that AI must also use this approach, only that there is a high
probability that it should (I don't believe that AI can be considered
just engineering, though, so this probability is irrelevant to me).

The second paragraph quoted above clearly shows the problem with a
top-down approach. If we are trying to build a T3-passing system, we
can't just look at all the features we need to implement, and work out
how to do it. The number of features would be huge, and when we were
finished, we would almost certainly find things missing. For example,
the vision system in our robot may be able to compute "three-
dimensional structure from two-dimensional frames of input", but it
might not be able to resolve objects at long distances in the way that
we can. We could take apart the system and insert some code that
computed this, but then we would probably find something else missing.
This situation is almost analogous to the one that arises if someone
tries to get around Goedel's theorem by making the Goedel formula an
axiom - another just appears to take its place.

> DENNETT:
> Another expression of much the same set of attitudes is ... my
> characterization of the methodology of AI as the gradual elimination
> of the intentional through a cascade of homunculi. One starts with
> the ideal specification of an agent (a robot, for instance) in terms
> of what the agent ought to know or believe, and want, what
> information-gathering powers it should have, and what capacities for
> (intentional) action. It then becomes an engineering task to design
> such an intentional system, typically by breaking it up into
> organized teams of sub-agents, smaller, more stupid homunculi, until
> finally all the homunculi have been discharged-- replaced by
> machines.

Clark:
This is the homunculus fallacy - the idea that cognition can be
explained by saying that a homunculus (a little man) inside the mind
is actually performing the action of, say, resolving the shapes
delivered by the vision system into recognised objects. This leads to
an infinite regress: how is the homunculus performing the action? If
there is another homunculus inside the first, then how is this one
performing its task? Dennett proposes that eventually this recursion
will end, the homunculi being discharged by replacing them with
machines.
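
A sketch of how such a cascade might be expressed (Python; the four
arguments are placeholders for whatever a real theory would have to
supply, so this is only the shape of the idea):

    def discharge(task, decompose, is_trivial, implement):
        # Keep splitting a task into stupider sub-tasks until each one
        # is trivial enough to be replaced by a machine (a function).
        if is_trivial(task):
            return implement(task)        # this homunculus is discharged
        parts = [discharge(sub, decompose, is_trivial, implement)
                 for sub in decompose(task)]
        def composed(*inputs):
            # The "cleverer" homunculus is nothing over and above the
            # combined work of its stupider parts.
            return [part(*inputs) for part in parts]
        return composed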

However, Dennett does not go into any more detail about the
homunculus. He seems to be saying that the task of designing an
artificial intelligence must be broken down into simpler and simpler
parts, until they become trivial to build. Surely, though, a similar
decomposition must first be carried out on a real intelligence before
this engineering method can be applied? This is where the problem
lies.

Searle addresses this in his paper, "Is The Brain A Digital
Computer?". He points out that there is no use in positing
progressively stupider homunculi until the bottom level is reached,
where there are "a whole bunch of homunculi who just say 'Zero one,
zero one'" (Searle, 1990). This is because in the end only the bottom
layer really exists; "the top layers are all just as-if" (ibid).

The trouble here is that the zeros and ones of the bottom layer are
not intrinsic to the physics of the actual task being performed. They
are merely interpreted as describing this process to an observer.
Thus, the homunculus fallacy cannot be avoided by these means.

Dennett then explains the similarities between Marr's, Newell's and
his own views on AI methodology:

> DENNETT:
> 1. stress on being able (in principle) to specify the function
> computed (the knowledge level or intentional level) independently of
> the other levels.

Clark:
I don't quite see why the mind should be divided up into discrete
levels that can be considered independently. Granted, at some point,
you're going to have to concentrate on implementing physical "neurons"
and "biochemicals", but these are hardly independent from the
"knowledge level" (whatever that is supposed to be - although I
presume this is nearer a conscious mind than a collection of neurons).
Higher levels are a direct result of the "physical layer".

> DENNETT:
> 2. an optimistic assumption of a specific sort of functionalism: one
> that presupposes that the concept of the function of a particular
> cognitive system or subsystem can be specified. (It is the function
> which is to be optimally implemented.)

Clark:
Here, Dennett seems to be saying that you must believe that the
"functions" you are trying to implement can actually BE implemented,
and DO actually exist to be implemented. This seems fairly obvious -
if you didn't have even a small amount of optimism about AI, why would
you bother trying to do it?

> DENNETT:
> 3. A willingness to view psychology or cognitive science as reverse
> engineering in a rather straightforward way.

Clark:
I don't really understand what Dennett means by this. It seems clear
to me that psychology and cognitive science must be reverse
engineering, as it fits the definition: we have an example of
intelligence, but don't know how it works. If we learn enough about
its components and how they work together, we may be able to build
intelligence from scratch. A forward engineering approach would amount
to little more than trial-and-error. I have no idea what "a rather
straightforward way" means. However, Dennett does eventually define
reverse engineering:

> DENNETT:
> Reverse engineering is just what the term implies: the
> interpretation of an already existing artifact by an analysis of the
> design considerations that must have governed its creation.

Clark:
I'm not sure that this is a full definition. When reverse engineering
an artifact, you do have to consider WHY certain components are
present, but the major task is to look at WHAT the components are, and
how they interact. This is especially true when reverse engineering
intelligence, as you are basically reverse engineering the products of
evolution. Evolution operates through random mutations of genes. If a
mutation results in an organism that is better suited to its
environment, there is a higher chance of the mutated gene being passed
on to the next generation; over many generations it can spread through
the population and may eventually become a standard part of the
species. However, sometimes mutations arise that have no clear purpose
or effect, and some features which have become superfluous are still
present in an organism. In these cases it is useless to analyse the
"design considerations" of such features, as they may have no point.
If their operation were examined before asking why they are actually
present, it would be clear that these superfluous features probably
need not be implemented.
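
As a toy sketch of that process (Python; genomes are just bit-lists and
'fitness' stands in for whatever the environment happens to reward -
nothing here is meant as a model of real genetics):

    import random

    def evolve(population, fitness, generations=100, mutation_rate=0.01):
        for _ in range(generations):
            # Mutation: occasional random changes, most of them pointless.
            for genome in population:
                for i in range(len(genome)):
                    if random.random() < mutation_rate:
                        genome[i] = 1 - genome[i]
            # Selection: the fitter half is kept and copied forward.
            population.sort(key=fitness, reverse=True)
            survivors = population[: len(population) // 2]
            population = [list(g) for g in survivors for _ in (0, 1)]
        return population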

> DENNETT:
> ...sometimes engineers put stupid, pointless things in their
> designs, sometimes they forget to remove things that no longer have
> a function, sometimes they overlook retrospectively obvious
> shortcuts. But still, optimality must be the default assumption; if
> the reverse engineers can't assume that there is a good rationale
> for the features they observe, they can't even begin their analysis.

Clark:
And evolution does exactly these things: it too leaves pointless or
vestigial features lying around in its designs. So it might be easier
to look at what a feature does before considering why it is there in
the first place.

> DENNETT:
> There is a phenomenon analogous to convergent evolution in
> engineering: entirely independent design teams come up with
> virtually the same solution to a design problem. This is not
> surprising, and is even highly predictable, the more constraints
> there are, the better specified the task is.

Clark:
The hope in cognitivism is that when a T3 (or maybe T4 or T5) passing
robot is created, it will be operating in the same way that we do: we
will then be able to explain our own workings. Of course, this would
only be possible if the goal is specified well enough that there is
little room for multiple solutions. There may be a problem if, say,
three robots passed T5: we would not know which one (if any) was
implementing intelligence in the same way as us.

> DENNETT:
> But as Ramachandran and others ... were soon to point out, Marr's
> top-down vision has its own blind spot: it over-idealizes the design
> problem, by presupposing first that one could specify the function
> of vision (or of some other capacity of the brain), and second, that
> this function was optimally executed by the machinery.
>
> That is not the way Mother Nature designs systems. In the
> evolutionary processes of natural selection, goal-specifications are
> not set in advance... Marr and others ... know perfectly well that
> the historical design process of evolution doesn't proceed by an
> exact analogue of the top-down engineering process, and in their
> interpretations of design they are not committing that simple
> fallacy of misimputing history. They have presupposed ... that even
> though the forward processes have been different, the products are
> of the same sort, so that the reverse process of functional analysis
> should work as well on both sorts of product.

Clark:
Ramachandran's criticisms seem similar to my own - it is wrong to
think that there was some grand plan drawn up at the dawn of time, and
that evolution has just been following steps to realise that plan.
The interesting question is whether this means that intelligence can
really be reverse engineered; usually the technique is applied to
something that has at some stage been forward engineered.

However, as Dennett points out later, there have been numerous
examples in biology where parts of humans, animals or plants have been
successfully reverse engineered. There does not seem to be any reason
to believe that the same could not be achieved with the mind: a lack
of forward engineering does not mean that reverse engineering cannot
be performed on an artifact. If no-one had ever seen a toaster before,
and one just fell out of the sky one day (landing on something soft),
we would still be able to take it apart and work out how it worked,
mainly because we are familiar with all of its components and can see
from the design of the toaster how each component interacts with the
others. Although we understand much about the components of the brain,
we do not have the same level of knowledge as we do for toaster
components. It seems that until we have a more complete understanding
of the brain's neural circuitry and biochemistry, reverse engineering
it will be extremely difficult.

> DENNETT:
> When human engineers design something (forward engineering), they
> must guard against a notorious problem: unforeseen side effects.
> When two or more systems, well-designed in isolation, are put into a
> supersystem, this often produces interactions that were not only not
> part of the intended design, but positively harmful... [T]he only
> practical way to guard against unforeseen side effects is to design
> the subsystems to have relatively impenetrable boundaries that
> coincide with the epistemic boundaries of their creators... The set
> of systems having this fundamental abstract architecture is vast and
> interesting, of course, but--and here is [Artificial Life's] most
> persuasive theme--it does not include very many of the systems
> designed by natural selection! The process of evolution is
> notoriously lacking in all foresight; having no foresight,
> unforeseen or unforeseeable side effects are nothing to it...
> Sometimes, ...designs emerge in which systems interact to produce
> more than was aimed at.

Clark:
I agree with Dennett on all of this, except for the suggestion that,
because evolution, with no foresight, sometimes produces beneficial
side-effects (through random mutations), this is somehow an advantage
for proponents of Artificial Life (AL). Although getting surprising
results from AL's bottom-up reverse engineering approach (ie.
reversing the same "approach" evolution takes) may well be
interesting, this seems to advocate a trial-and-error style of method
for AL: put some artificial neurons and biochemicals together, and see
what the results are. Unexpected results often lead to breakthroughs,
but they are generally unwanted - we like to know the effects of what
we are producing, and surprising results usually do more to steer us
off the path than to carry us further along it.

> DENNETT:
> In biology, one encounters quite crisp anatomical isolation of
> functions (the kidney is entirely distinct from the heart, nerves
> and blood vessels are separate conduits strung through the body),
> and without this readily discernible isolation, reverse engineering
> in biology would no doubt be humanly impossible ...
>
> If we think that biological systems--and cognitive systems in
> particular--are very likely to be composed of such multiple
> function, multiple effect, elements, we must admit the likelihood
> that top-down reverse engineering will simply fail to encounter the
> right designs in its search of design space.

Clark:
Functional isolation is dependent on what "level" of the system you
are looking at. The kidney and heart perform very different functions,
but one could not survive without the other: at a high level, they are
connected. Similarly, at a very low level, the two organs are made up
of much the same materials.

A top-down approach would concentrate on each element in turn,
effectively producing many modules or subsystems which will hopefully
all work together when they are finally "slotted into" the whole
supersystem. In doing so, it would more than likely miss many of the
lower-level dependencies. This is a common problem in large software
engineering projects - many modules are designed by many different
programmers, and when the time comes to connect them all together,
they rarely communicate in the way that was originally planned. Of
course, this approach can also lead to surprising results in the way
that bottom-up can, but as the system has constantly been considered
in terms of its parts rather than as a whole, it seems unlikely that
the required level of communication and dependency between the various
subsystems will be there.

Dennett ends by looking at memory - research has shown that short-term
and long-term memory are not distinct parts of the brain, but use the
same tissues with altered connections between them:

> DENNETT:
> One possibility, of course, is that the two functions are just
> neatly superimposed in the same space..., but another possibility
> ... is that this ubiquitous decomposition of function is itself a
> major mistake, and that the same effects can be achieved by
> machinery with entirely different joints. This is the sort of issue
> that can best be explored opportunistically--the same way Mother
> Nature explores--by bottom-up reverse engineering. To traditional
> top-down reverse engineering, this question is almost impervious to
> entry.

Clark:
This again illustrates the problem with trying to break the brain /
mind down into discrete functions (top-down). Whilst we CAN perform
specific functions, such as sight, these exist primarily at a high
level - once you look deeper, the "inter-connectedness" of the human
body, and especially the brain, becomes apparent. I think that only by
adopting a bottom-up approach to reverse engineering the brain can any
real success be achieved. Even if it turned out that we could use a
top-down approach successfully, its somewhat trial-and-error character
means that the chances of finding the right "combination" are quite
low.

Graham.


