Re: Dennett: Cognitive Science as Reverse Engineering

From: Edvaldsson Ragnar (raggi@btinternet.com)
Date: Thu May 24 2001 - 02:09:26 BST


Edvaldsson:
This paper by Dennett tries to explain the difference between the top-down
and bottom-up processes needed to accomplish some form of AI. He draws on
examples both from the biological world (humans) and from the world of
computation. His paper is pretty good, except that he uses some fancy,
rarely used words a bit too often, which reduces the clarity of the paper.

> DENNETT:
> When a person perceives (and comprehends) speech, processes occur
> in the brain which must be partly determined bottom-up, by the input
> and partly determined top-down, by effects from on high,
> such as interpretive dispositions in the perceiver due to the
> perceiver's particular knowledge and interests.
> (Much the same contrast, which of course is redolent of
> Kantian themes,
> is made by the terms "data-driven" and "expectation-driven").

> DENNETT:
> There is no controversy, so far as I know, about the need for this
> dual source of determination, but only about their relative importance,
> and when, where, and how the top-down influences are achieved.

Edvaldsson:
The author is stating that hearing and interpreting speech is not a one-way
process in the brain. Instead we seem to have at least two layers of
processing: the lower layer is the actual physical system that "hears" the
speech in the outside world and, after processing the input, passes the
message on to the higher-level layer. The higher-level layer responds to
the input depending on the individual's interest in it. Very much
simplified, this setup could be described as something like a modem and a
PC. The modem is the lower layer, which receives an analogue signal,
converts it into a digital signal and passes it on to the PC, which
represents the higher level. I agree with this point by Dennett, although
we don't have clear definitions of the processes that occur in the human
brain or of where top-down takes over from bottom-up. He goes on to give us
some examples.
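
As a toy illustration of the two layers (my own sketch, not something from
the paper), the lower layer below just transduces the raw signal into
words, and the higher layer keeps whatever the perceiver's interests pick
out:

    # My own sketch (not from the paper) of the modem/PC picture: a lower,
    # data-driven layer that only transduces the raw signal, and a higher,
    # expectation-driven layer that interprets it according to the
    # perceiver's interests.
    def lower_layer(raw_signal):
        """Bottom-up: turn the raw signal into words, no interpretation."""
        return raw_signal.lower().split()

    def higher_layer(words, interests):
        """Top-down: keep only what the perceiver's interests pick out."""
        return [w for w in words if w in interests]

    words = lower_layer("Chess lecture moved to room four")
    print(higher_layer(words, interests={"chess", "lecture"}))  # -> ['chess', 'lecture']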

Edvaldsson:
I would have liked a clearer explanation of the Kantian themes Dennett
refers to. According to www.dictonary.com, Kant was a German idealist
philosopher who argued that reason is the means by which the phenomena of
experience are translated into understanding. By "data-driven" Dennett is
referring to the bottom-up part of the process, which is driven by the
input (speech), and by "expectation-driven" to the top-down part, which
only deals with the processed input that it finds relevant.

Edvaldsson:
Dennett gives us two examples of too much top-down contribution. The first
one is about an old philosopher who has trouble hearing and, when
introduced to a colleague who is a professor of business ethics, doesn't
seem to hear what his colleague is a professor of. After a while the old
philosopher gives up and claims that he can't be hearing him right, because
it sounds just like "business ethics". The other example Dennett gives is a
similar case of too much top-down, but this time it is a computer that
misunderstands. An AI speech-understanding system used as an interface to a
chess program is being demonstrated. The user simply has to say which piece
he wants to move and the computer will do the rest. However, before making
the first move the user clears his throat, and the computer takes it as
"Pawn to King-4".

> DENNETT:
> In these contexts, the trade-off between top-down and bottom-up is a design
> parameter of a model that might, in principle, be tuned to fit the
> circumstances. You might well want the computer to "hear" "Con to Ping-4"
> as "pawn to King-4" without even recognizing that it was making an
> improvement on the input. In these contexts, "top-down" refers to a
> contribution from "on high"--from the central, topmost information
> stores--to what is coming "up" from the transducers or sense organs

Edvaldsson:
Dennett here explains the relationship between the stored information (top)
and the input (bottom). If we think of the amount of top-down contribution
as a variable, then setting it high makes the matching very tolerant, as he
explains with "Con to Ping-4" being understood as "pawn to King-4". If we
set the top-down influence low, then the input has to match the stored
information much more closely to be understood at all.
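
One way to picture this "design parameter" (my own sketch in Python,
nothing from Dennett) is a matcher with a tunable cutoff: the cutoff plays
the role of the top-down knob, so turning the tolerance up lets the system
quietly correct noisy input to the nearest legal move, while turning it
down means only close matches get through:

    # Sketch of my own (not from the paper): a "top-down knob" for the chess
    # speech interface. The legal-move vocabulary is the knowledge "on high";
    # the cutoff decides how far a noisy input may be from it and still be
    # silently corrected.
    import difflib

    LEGAL_MOVES = ["pawn to king-4", "pawn to queen-4", "knight to king's bishop-3"]

    def interpret(heard, top_down_tolerance=0.6):
        """Return the legal move closest to what was heard, or None.

        A high tolerance (low cutoff) lets expectations override a poor
        input; a low tolerance demands a close bottom-up match.
        """
        matches = difflib.get_close_matches(heard.lower(), LEGAL_MOVES, n=1,
                                            cutoff=1.0 - top_down_tolerance)
        return matches[0] if matches else None

    print(interpret("con to ping-4", top_down_tolerance=0.7))  # -> pawn to king-4
    print(interpret("con to ping-4", top_down_tolerance=0.1))  # -> None (input rejected)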

>> DENNETT (quoting MARR):
>> It is hopeless to try to build cognitive science models from
>> the bottom-up: by first modeling the action of neurons
>> (or synapses or the molecular chemistry of neurotransmitter production),
>> and then modeling the action of cell assemblies, and then tracts,
>> and then whole systems
>> (the visual cortex, the hippocampal system, the reticular system).
>> You won't be able to see the woods for the trees.

Edvaldsson:
Dennett here quotes Marr, who did extensive research on the theory of
vision. Marr's theory of vision is a good example of a largely bottom-up
process, and it managed to get far more out of pure data (the input) than
previously assumed. But Marr also claims that a pure bottom-up approach can
only get us so far in developing a cognitive model. Marr therefore came up
with the top-down vision of a three-level cascade: the computational, the
algorithmic and the physical level.

> DENNETT:
> First, he insisted, you had to have a clear vision of what the task or
> function was that the neural machinery was designed to execute.
> This specification was at what he called, misleadingly,
> the computational level: it specified "the function" the machinery was supposed
> to compute and an assay of the inputs available for that computation.
> With the computational level specification in hand, he claimed,
> one could then make progress on the next level down, the algorithmic level,
> by specifying an algorithm (one of the many logically possible algorithms)
> that actually computed that function. Here the specification is constrained,
> somewhat, by the molar physical features of the machinery:
> maximum speed of computation, for instance, would restrict the class of algorithms,
> and so would macro-architectural features dictating when and under what conditions
> various subcomponents could interact. Finally, with the algorithmic level more or
> less under control, one could address the question of actual
> implementation at the physical level.

Edvaldsson:
As Dennett later points out, this follows general engineering principles:
first decide on and understand what it is you want to design, then design
it, decide how you are going to implement it, and finally build it. Dennett
then goes on to give examples of reverse engineering, since that is what is
needed if we want to work out what it is we are trying to design in the
first place. I agree with Dennett that "computational" is a misleading word
for this first level of design proposed by Marr. Dennett then points out
that reverse engineering does not necessarily lead to the most optimised
way of achieving the goal. For example, would it serve any purpose, if we
were to build a copy of a human, to include the appendix, an organ that is
not useful to modern humans and is potentially deadly if it bursts?
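
Just to make Marr's cascade concrete for myself (a toy example of my own,
not from Dennett or Marr), here is a tiny one-dimensional "edge detection"
task split across the three levels:

    # My own toy illustration of Marr's three levels (not from the paper).
    # Computational level: WHAT is computed - mark positions where the image
    # intensity changes sharply.
    # Algorithmic level: HOW - one of many logically possible algorithms;
    # here a simple neighbour-difference filter with a threshold.
    # Physical level: the machinery it actually runs on - the Python
    # interpreter and the CPU, invisible at the two levels above.

    def find_edges(intensities, threshold=10):
        """Return indices where neighbouring intensities differ sharply."""
        return [i for i in range(1, len(intensities))
                if abs(intensities[i] - intensities[i - 1]) >= threshold]

    scanline = [12, 13, 12, 90, 91, 90, 15, 14]   # dark - bright - dark
    print(find_edges(scanline))                    # -> [3, 6]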

> DENNETT:
>But as Ramachandran (1985) and others (e.g., Hofstadter--see Dennett, 1987)
>were soon to point out, Marr's top-down vision has its own blind spot:
>it over-idealizes the design problem, by presupposing first that one could specify
>the function of vision (or of some other capacity of the brain), and second,
>that this function was optimally executed by the machinery.

Edvaldsson:
This is an interesting point that Dennett takes further.

> DENNETT:
> That is not the way Mother Nature designs systems.
> In the evolutionary processes of natural selection, goal-specifications
> are not set in advance--problems are not formulated and then proposed,
> and no selective forces guarantee optimal "solutions" in any case.
> .......
> They have presupposed, however--and this is the target of a more
> interesting and defensible objection--that in spite of the difference
> in the design processes, reverse engineering is just as applicable
> a methodology to systems designed by Nature, as to systems designed
> by engineers. Their presupposition, in other words, has been that
> even though the forward processes have been different,
> the products are of the same sort, so that the reverse process of
> functional analysis should work as well on both sorts of product.

Edvaldsson:
Spot on: if we leave God out of the picture, then the first point must
hold. The second point is correct as well: even though life evolves by
trial and error, that does not mean the final product can't be copied, just
as it is not necessary to re-invent the wheel if you want to build a car.

> DENNETT:
> When human engineers design something (forward engineering), they must
> guard against a notorious problem: unforeseen side effects. When two or
> more systems, well-designed in isolation, are put into a supersystem,
> this often produces interactions that were not only not part of the
> intended design, but positively harmful; the activity of one system
> inadvertently clobbers the activity of the other.
> ....
> In short, you attempt to insulate the subsystems from each other, and
> insist on an overall design in which each subsystem has a single,
> well-defined function within the whole.
> ....
> The process of evolution is notoriously lacking in all foresight;
> having no foresight, unforeseen or unforeseeable side effects are
> nothing to it; it proceeds, unlike human engineers, via the profligate
> process of creating vast numbers of relatively uninsulated designs,
> most of which, of course, are hopelessly flawed because of
> self-defeating side effects, but a few of which, by dumb luck, are
> spared that ignominious fate. Moreover, this apparently inefficient
> design philosophy carries a tremendous bonus that is relatively
> unavailable to the more efficient, top-down process of human engineers:
> thanks to its having no bias against unexamined side effects, it can
> take advantage of the very rare cases where beneficial serendipitous
> side effects emerge.
> ....
> Sometimes, that is, designs emerge in which systems interact to
> produce more than was aimed at. In particular (but not exclusively) one
> gets elements in such systems that have multiple functions.

Edvaldsson:
This is interesting, as it is the very difference between how nature
"designs" and "updates" its current lifeforms and how humans design and
implement ideas. The main issue, though, is that with nature's way we get
elements with multiple functions, because the subsystems are not all
insulated from each other. If there are many occurrences of this in the
cognitive system, then it becomes really hard to copy the system in a
top-down manner, as Dennett explains.

> DENNETT:
> If we think that biological systems--and cognitive systems in
> particular--are very likely to be composed of such multiple function,
> multiple effect, elements, we must admit the likelihood that top-down
> reverse engineering will simply fail to encounter the right designs in
> its search of design space. Artificial Life, then, promises to improve
> the epistemic position of researchers by opening up different regions
> of design space--and these regions include the regions in which
> successful AI is itself apt to be found!

Edvaldsson:
The argument has almost gone full circle here: bottom-up has its
disadvantages, and because the human mind is not split up into clear,
insulated components it will be difficult, if not impossible, to copy it
using a pure top-down methodology; AL could possibly bring new options for
how to go about solving the problem. Dennett mentions that most people
imagine that the mind can be split into three different components: the
processing, the long-term memory and the short-term memory. However,
according to Dennett:

> DENNETT:
> But nothing we know in functional neuroanatomy suggests anything like
> this division into separate workspace and memory. On the contrary,
> the sort of crude evidence we now have about activity in the cerebral
> cortex suggests that the very same tissues that are responsible for
> long term memory, thanks to relatively permanent adjustments of the
> connections, are also responsible, thanks to relatively fleeting
> relationships that are set up, for the transient representations that
> must be involved in perception and "thought".
> ....
> This is the sort of issue that can best be explored
> opportunistically--the same way Mother Nature explores--by bottom-up
> reverse engineering. To traditional top-down reverse engineering,
> this question is almost impervious to entry.

Edvaldsson:
If true, this is a clear case of the components of the mind being
uninsulated and having multiple functions, which makes a top-down
methodology extremely hard to follow.
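
To see for myself why such uninsulated, multi-function elements resist a
neat decomposition, here is a sketch of my own (a standard Hopfield-style
network, not something from Dennett's paper) in which the very same weight
matrix is the long-term memory and also produces the transient states
during recall, so there is no separate "workspace" component to point at:

    # My own sketch (not from the paper): a tiny Hopfield-style network in
    # which the SAME connections both store long-term memories and drive the
    # transient "processing" during recall - no separate, insulated workspace
    # and memory components.
    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])   # two stored memories

    # Long-term memory: Hebbian weights, a "relatively permanent adjustment
    # of the connections".
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    # Processing: transient states that those same connections produce when
    # a noisy cue is presented.
    state = np.array([1, -1, 1, 1, 1, -1])         # corrupted version of pattern 1
    for _ in range(5):
        state = np.sign(W @ state)
        state[state == 0] = 1                      # break ties
    print(state)                                   # settles back on the stored pattern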

Edvaldsson:
This paper by Dennett is a very interesting one; however, I feel it is a
bit pessimistic, and at times it is easy to feel that there is no solution
and that AI will never be achieved. Personally I feel that Artificial Life
could shed some new light on the issue of cognition. Maybe it is just a
question of starting to implement ideas instead of trying to design a
complete model of the mind first, because as we can see from this paper,
the human brain is not implemented from a well-designed model with loads of
diagrams; instead it is a hack that nature has put together over a very
long period of time.


