Re: Harnad (2) on Computation and Cognition

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Tue Apr 11 2000 - 12:20:46 BST


[Nick: You quoted too much. Only quote the part you are commenting on,
or it inflates the text and makes the thread too hard to follow. SH]

On Mon, 3 Apr 2000, Worrall, Nicholas wrote:

> > TERRY:
> > There may be some argument here along the lines of "we just interpret
> > our thoughts from some internal symbol system, and project a meaning
> > onto them".
> > This extra layer of abstraction doesn't actually matter though, as even
> > if we give meaning to internal squiggles and squoggles, the
> > interpretation is still intrinsic in the system (our brains).
>
> WORRALL:
> I think the significant point here is that the thoughts are not based
> upon the system for their outcome: given the system, the thoughts are
> independent of it. That we project meaning onto an internal system is
> true, as we can see from the learning and ideas of a newborn child.

We project meanings (from our heads) onto the symbols in an external
book. The meanings of those symbols are then grounded, indirectly, in
the meanings in our own heads. This strategy obviously does not work
when the symbols are not in a book, but in our heads, for that would
lead to an infinite regress (part of the head projects onto another
part: well what is going on in the part that the projecting is coming
from? can't be just more projecting....).

This is related to the homunculus problem in mental imagery: There are
no doubt images (pictures) in my head; but if so, who/what is looking at
them? A little man in my head (a "homunculus")? But then what is going
on in HIS head? Can't be just more images (because that just leads to
more homunculi...).

> WORRALL:
> Would it be possible to have a number of levels...
> say that parts of a physical system could be formally
> equivalent to computers.

Yes, and this is what has come to be called "modularity" -- that there
may be autonomous, self-sufficient components (in the brain, or the
T3-passing device). The trouble with modularity (though it can certainly
exist) is that it goes against the spirit of the Turing Test, where the
"T" stands as much for "Total" as for Turing and Test. For modules are
by definition subtotal: they cannot do it all; so they would be
Turing-distinguishable: How could we ever know whether we were really
dealing with an autonomous module, or just a toy model for an arbitrary
fragment of T3 capacity? The system would have to go the distance
(perhaps making use of modules along the way) to T3 in order to prove
itself.

> > TERRY:
> > When we are given some new symbol, the first thing we want to know is
> > what it means. The meaning of the symbol was entirely used to justify
> > what we could do with it. The first time I was taught algebra, and the
> > notion of a "value X", we were taught that it's any number we like, and
> > should be treated as such. Maybe I was just taught in a strange way. I
> > agree that it isn't just syntax, but I think meaning was crucial in the
> > teaching.
>
> WORRALL
> Given a new symbol, it is often enough to know what is around it to
> work out the meaning of the given symbol. Given mathematics and the
> example 3#1=4, here we can say that the symbol # is the addition
> manipulator. But to know this we must also know the meaning of
> all the other symbols, and so this approach is limited.

Of course meaning and context are used when you are taught, or you figure
out, the meaning of a symbol. But that meaning is not used in the actual
symbol manipulations: They are done using rules that are based on the
(arbitrary) SHAPES of the symbols, not their meanings.
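
To make this concrete, here is a toy sketch (in Python; the tally
notation and the rule are my own illustration, not anything from the
exchange) of a manipulator that "does addition" in the spirit of the
3#1=4 example, purely by shape: it rewrites strings of "I"s with no
access to what they mean.

    # A toy symbol system: numbers are tally strings ("III" = 3), and
    # the single rule rewrites "X#Y" to the concatenation of the two
    # tallies. The rule mentions only the SHAPES of the symbols
    # ("I", "#"), never their interpretation as numbers or addition.

    def rewrite(expr: str) -> str:
        """Apply the one syntactic rule: delete the '#' between tallies."""
        left, right = expr.split("#")
        assert set(left + right) <= {"I"}, "only tally shapes allowed"
        return left + right

    print(rewrite("III#I"))   # -> "IIII"

The rule never consults the interpretation; the "plus" and the
"equals" are projected onto the strings from outside.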

> >> HARNAD:
> >> It is easy to pick a bunch of arbitrary symbols and to
> >> formulate arbitrary yet systematic syntactic rules for manipulating them,
> >> but this does not guarantee that there will be any way to interpret it all
> >> so as to make sense (Harnad 1994b).
>
> > TERRY:
> > The definition of 'make sense' would be interesting. What makes perfect
> > sense to one person may make no sense to the next. Chinese doesn't make
> > sense to me, but it does to someone who speaks it. Should the above
> > read "make sense to somebody" ?
>
> WORRALL:
> I think that Harnad here is trying to make the point that a totally new
> symbol system invented by someone would be
> garbage to anyone but the person who invented it. It takes us back
> to the problem of whether the symbol system could be interpreted by
> context words such as a dictionary with the word 'dictionary' on it.

Actually, no, neither of you has it quite right. The criterion is that
a symbol system must be SYSTEMATICALLY INTERPRETABLE, not that someone
actually has to interpret it. (But of course if none of us knows the
interpretation, we can't know whether it's really a nontrivial symbol
system or just gibberish).

But "interpretable" really means something like "decipherable." It's not
enough that someone says "Wow, ya, that all makes systematic sense to
me!" A person can be high on something an the scratches on the wall
could seem to be making sense to him!.
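
For illustration only (a minimal sketch, in the spirit of Hofstadter's
"pq-system"; the shapes are arbitrary), this is what a systematic
interpretation looks like: a single mapping that carries every
well-formed string onto an arithmetic statement, and that could be
spelled out, symbol by symbol, to anyone.

    # Strings have the form "<tallies>p<tallies>q<tallies>",
    # e.g. "IIpIIIqIIIII". The systematic interpretation maps each
    # "I"-group to a number, "p" to "plus", "q" to "equals".

    def interpret(s: str) -> str:
        a, rest = s.split("p")
        b, c = rest.split("q")
        return f"{len(a)} plus {len(b)} equals {len(c)}"

    def is_true_under_interpretation(s: str) -> bool:
        a, rest = s.split("p")
        b, c = rest.split("q")
        return len(a) + len(b) == len(c)

    print(interpret("IIpIIIqIIIII"))                     # "2 plus 3 equals 5"
    print(is_true_under_interpretation("IIpIIIqIIIII"))  # True

The point is that the mapping is explicit and checkable, not a private
feeling that it "all makes sense."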

> > TERRY:
> > I think most people would assume that the shape of letters and numbers
> > are arbitrary in relation to what they actually mean (apart from maybe
> > the numbers 1 and 0). As Harnad points out.
>
> WORRALL:
> I think that these types of symbols evolve as the systems are used. Take
> our language system: certain symbols such as ':)' and ';)' pertain
> to what something looks like.

Symbols may have analogue properties that are related to what they stand
for (by resemblance, like a circle standing for the moon, or even causal
connection, like a footprint standing for whoever made it), but these
analogue properties are irrelevant and cannot enter into the rules for
manipulating the symbols in a symbol system. When it comes to
symbol manipulation, the symbols may as well have been just 0 and 1,
with no resemblance or causal connection to anything.

> WORRALL:
> As I argued earlier, a given symbol system may be interpretable only to
> one entity; and from that, if only one entity can interpret it, then it must be
> a symbol system.

You didn't argue that, you stated it! If you can decipher a code, you
should be able to explain the systematic interpretation to anyone and
everyone, symbol by symbol. If you cannot, if it makes sense only to
you, then chances are it does not really make systematic sense at all.

> WORRALL:
> Given a syntactic system then for any given meaning symbols may be
> introduced that have the same meaning as the ones which are being
> substituted, then the computation would have the same meaning.

Not quite sure what you mean. Meaning's not part of a symbol system;
you could swap one arbitrary symbol shape for another; the
shape-manipulation rules (formal, syntactic) would be the same. So if
the first system was systematically interpretable, the second would be
too. Only arbitrary shapes would have changed (like changing a
notational system).
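
A tiny sketch of that last point (hypothetical shapes, continuing the
toy tally system from above): relabel the shapes by a one-to-one
substitution, state the same rule over the new shapes, and the system
behaves identically, so any systematic interpretation of the first
system carries over to the second.

    # Swap arbitrary shapes via a bijection; the syntactic rule is the
    # same rule, restated over the new shapes.
    SWAP = {"I": "*", "#": "@"}

    def relabel(expr: str) -> str:
        return "".join(SWAP[ch] for ch in expr)

    def rewrite_new(expr: str) -> str:
        """The same one rule as before, over '*' and '@' shapes."""
        left, right = expr.split("@")
        return left + right

    print(relabel("III#I"))               # -> "***@*"
    print(rewrite_new(relabel("III#I")))  # -> "****" (still "3 + 1 = 4")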

> > TERRY:
> > I had problems accepting Searle's test - it always seemed like a trick
> > (Can we actually say we understand how _our_ minds process input and
> > produce output? No.
> > So we no more understand the symbol system going on in our heads than
> > we do the memorised pen-pal program. So why is our symbol system the
> > only mind present?)
>
> WORRALL:
> I agree here with Terry, we can never know how the mind works from
> input to output, and every mind probably interprets the inputs in
> different ways to the next mind. The input and output might be the same
> although the computation on them may differ.

Of course we don't understand how our minds work (otherwise we could do
all of AI by just sitting in an armchair and introspecting about how we
pass TT). But when it comes to understanding Chinese, all you need to be
able to do is know THAT you do or do not understand Chinese (or
English), not HOW. And that's all Searle needs or uses, in his
argument.

HARNAD, Stevan


