Re: Dennett: Cognitive Science as Reverse Engineering

From: Yusuf Larry (
Date: Tue May 01 2001 - 18:47:21 BST

>My understanding of top-down and bottom-up modeling is that with top-down,
>you begin with a starting symbol and apply rules to it until you find its
>meaning. With bottom-up, you begin with a set of symbols, i.e., a
>sentence, and break it down into its sub-components until you understand
>it. Here, Dennett uses the analogy of language comprehension to
>demonstrate the difference. He suggests that when we hear someone speaking
>the words are viewed as input into the brain and processed in a bottom-up
>fashion - starting with a sentence and breaking it down into its
>sub-components. But the knowledge that we have in our brains prior to the
>sentence being fed in as input means the sentence is dealt with in a
>top-down fashion as well - taking the individual components and applying
>rules to them.

>I did not think this was a particularly enlightening analogy. Perhaps the
>author would like to use two separate analogies to demonstrate the

Yusuf L:
Maybe it wasn't such a great analogy, but surely the fact that humans
are capable of performing top-down and bottom-up processing on the same
problem is significant, considering that computer science and AI have
been based on making a choice between the two methods rather than using
the two in unison.
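To make the distinction concrete, here is a minimal sketch (not from the original post; the toy grammar and all names are illustrative) contrasting the two strategies: a top-down parser starts from the start symbol and expands rules until the expansion matches the input, while a bottom-up parser starts from the words and repeatedly replaces sub-components with the category that produces them.

```python
# Toy grammar: nonterminals map to lists of productions.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"]],
}

def top_down(symbol, words, pos=0):
    """Top-down (recursive descent): expand `symbol` by its rules and
    check the expansion against the input. Returns the position reached
    on success, or None on failure."""
    if symbol not in GRAMMAR:                      # terminal symbol
        if pos < len(words) and words[pos] == symbol:
            return pos + 1
        return None
    for production in GRAMMAR[symbol]:             # try each rule in turn
        p = pos
        for part in production:
            p = top_down(part, words, p)
            if p is None:
                break
        else:
            return p                               # whole production matched
    return None

def bottom_up(words):
    """Bottom-up: repeatedly rewrite any substring that matches the
    right-hand side of a rule into its left-hand side, until the whole
    input reduces to the start symbol (or nothing more reduces)."""
    items = list(words)
    changed = True
    while changed:
        changed = False
        for lhs, productions in GRAMMAR.items():
            for rhs in productions:
                n = len(rhs)
                for i in range(len(items) - n + 1):
                    if items[i:i + n] == rhs:      # sub-component found
                        items[i:i + n] = [lhs]     # replace with its category
                        changed = True
    return items == ["S"]

sentence = "the dog sees the cat".split()
```

Here `top_down("S", sentence)` returns 5 (all five words consumed), and `bottom_up(sentence)` returns True; the point of the sketch is simply that both routes analyse the same sentence from opposite directions, which is what Dennett suggests the brain does simultaneously.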

>>There is no controversy, so far as I know, about the need for this dual
>>source of determination, but only about their relative importance, and
>>when, where, and how the top-down influences are achieved. For instance,
>>speech perception cannot be entirely data-driven because not only are the
>>brains of those who know no Chinese not driven by Chinese speech in the
>>same ways as the brains of those who are native Chinese speakers, but
>>also, those who know Chinese but are ignorant of, or bored by, chess-
>>talk, have brains that will not respond to Chinese chess-talk in the way
>>the brains of Chinese-speaking chess-mavens are. This is true even at the
>>level of perception: what you hear--and not just whether you notice
>>ambiguities, and are susceptible to garden-path parsings, for
>>instance--is in some measure a function of what sorts of expectations you
>>are equipped to have.

>Here, Dennett explains that speech perception cannot be entirely
>data-driven, and to back up his claim he points out that our brains are
>equipped to deal with speech recognition; however, this does not
>automatically mean that we can understand all speech. If we can speak only
>English, we cannot understand Chinese. He further demonstrates that if we
>speak English but are not interested in football, and someone tries to
>talk to us about football, then we will understand the vocabulary, but not
>necessarily the content. For example, I do not understand a great deal
>about football, and if someone talks to me about it, I can understand
>the words they use, but I do not understand, for instance, the
>"off-side" rule that might come up in conversation. I understand 'off'
>and 'side', but do not
>understand the combination of the two. The combinations change the context
>of the words.

Yusuf L:
Totally agree. The problem of understanding language, not just the words
but the context in which they are used, has plagued AI for decades. A
very interesting question would be how to implement a machine that can
pick out the context, and then interpret the speech based on that
context, without hitting the frame problem (through building up its
knowledge of every possible interpretation in every context). I doubt
that the use of symbol grounding in a T3 candidate would help, because
knowing what a football is and how the game of football works does not
mean that the machine would be able to understand the off-side rule.

I suspect Harnad would say, why worry? Most humans do not know the
off-side rule, so why should one expect the T3 candidate to know it?
However, following Turing's indistinguishability thesis, if the T3
candidate were tested against a human who knew the off-side rule,
surely it has failed the TT, or has it?

This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:30 BST