Re: Harnad (2) on Computation and Cognition

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Mon Mar 27 2000 - 23:23:22 BST


On Mon, 27 Mar 2000, Terry, Mark wrote:

> > HARNAD:
> > The interpretation of a symbol system is not intrinsic to the
> > system; it is projected onto it by the interpreter. This is not true of our
> > thoughts.
>
> There may be some argument here along the lines of "we just interpret
> our thoughts from some internal symbol system, and project a meaning
> onto them".

As long as it's all going on inside your own head, there's no problem.
It's when the interpretation is coming from someone ELSE's head that
there's a problem. But if it's all going on in your own head, and what you
are doing is interpreting your own symbols, the same way you interpret
someone else's symbols, say, in a book, then what we are interested in
is what that interpretative "module" is! It can't just be more symbols.

I assume that's what you mean here too:

> This extra layer of abstraction doesn't actually matter though, as even
> if we give meaning to internal squiggles and squoggles, the
> interpretation is still intrinsic to the system (our brains).

> > HARNAD:
> > We must accordingly be more than just computers. My guess is that
> > the meanings of our symbols are grounded in the substrate of our robotic
> > capacity to interact with that real world of objects, events and states of
> > affairs that our symbols are systematically interpretable as being about.
>
> And computers must therefore be less than us. It is interesting that
> Harnad supposes that interaction is key. Defining what level this
> interaction must occur at would seem an important problem. I.e., is
> being told what a donkey looks like enough, or do we have to see a
> donkey, or do we have to see a donkey in the correct context, to be
> able to correctly identify another donkey?

I expect that the only reason we can get so much out of seeing things,
or being told what they look like, is because our visual categories are
grounded in our other senses, and in our movements, too. A chair is not
just what a chair looks like, but all the other things you can do
to/with it. So don't just think of vision (and of symbolic descriptions
of what things look like), but of all the sensorimotor avenues of
interaction with objects (what you can DO with them being at least as
important as what they LOOK like).

> When we are given some new symbol, the first thing we want to know is
> what it means. The meaning of the symbol was entirely used to justify
> what we could do with it. The first time I was taught algebra, and the
> notion of "value X", we were taught that it's any number we like, and
> should be treated as such. Maybe I was just taught in a strange way. I
> agree that it isn't just syntax, but I think meaning was crucial in the
> teaching.

This is an important point: Mark is right. Mathematicians PRETEND that
they are just using and teaching symbol-manipulation rules, but of
course they DO rely on the meanings of the symbols in teaching and in
understanding what they are doing. So not even mathematical
understanding (let alone any other kind of understanding) is merely
squiggling. (I think this is what the mathematician Penrose has in mind
when he insists that mathematical intuition -- e.g., when we are
"seeing" the truth of the Goedel statement "I am true and I am
unprovable" -- is not just symbolic or algorithmic; he too thinks it's
hybrid.)

> > HARNAD:
> > It is easy to pick a bunch of arbitrary symbols and to
> > formulate arbitrary yet systematic syntactic rules for manipulating them,
> > but this does not guarantee that there will be any way to interpret it all
> > so as to make sense
>
> The definition of 'make sense' would be interesting. What makes perfect
> sense to one person may make no sense to the next. Chinese doesn't make
> sense to me, but it does to someone who speaks it. Should the above
> read "make sense to somebody" ?

Good question. But it's not just a matter of making "holistic" sense
(the way it made sense to the man who sniffed laughing gas, that "the
secret of the universe" was "The Smell of Petroleum Pervades
Throughout" or "Life Is Like a Bagel"). Something has to make
SYSTEMATIC sense. Every (syntactically correct) combination of Chinese
symbols makes sense (to anyone who understands Chinese). But it takes
two to tango. The systematic interpretability is a powerful, remarkable
property of nontrivial symbol systems such as Chinese. But even more
remarkable is whatever it is that is going on inside a system that
actually understands what the symbols mean. The point is that such a system
cannot be just a symbol system; it has to be hybrid.

> I think most people would assume that the shapes of letters and numbers
> are arbitrary in relation to what they actually mean (apart, maybe,
> from the numbers 1 and 0), as Harnad points out.

Umm, what was that I said about 1 and 0? They're every bit as arbitrary
as "one" and "zero" or "egy" and "nulla"...

Perhaps you mean the beads on an abacus are not completely arbitrary
(just as counting on fingers is not completely arbitrary, or symbolic).
I agree; primitive symbols are sometimes partly iconic. (Words
sometimes even sound like the things they stand for: onomatopoeia.)
That may explain the origins of some notational systems; but once a
symbol is used as a symbol, its resemblance, if any, to what it means
becomes irrelevant (like implementation-independence).

> Harnad then addresses my earlier question about interpretation:
>
> > HARNAD:
> > We may need a successful human interpretation
> > to prove that a given system is indeed doing nontrivial computation, but
> > that is just an epistemic matter. If, in the eye of God, a potential
> > systematic interpretation exists, then the system is computing, whether or
> > not any Man ever finds that interpretation.
>
> Isn't it possible that every symbol system has the potential to be
> systematically interpretable? Can we ever say "there is no systematic
> interpretation to system X" and be guaranteed correctness ?

Perhaps not. But there might be some complexity-theoretic factors here:
A string is nonrandom to the degree to which there is a shorter string
(an algorithm) that could generate it (the shorter the better).

If the only sense in which it is true that every symbol system can be
given a systematic interpretation is that we can make the
interpretation (the mapping of the symbols onto their meanings) as big
as, or bigger than, the symbol system itself, then maybe a nontrivial
symbol system is one whose interpretation is smaller rather than bigger
than the system.

[But that is just a pseudo-formalization of a complexity-theoretic
intuition similar to the one about "duals" in the paper.]
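
As a toy illustration of that intuition (my own sketch, not anything
from the paper), you can use off-the-shelf compression as a crude,
computable stand-in for "length of the shortest generating description"
-- true Kolmogorov complexity is uncomputable, and zlib here is just a
convenient proxy I am assuming for the sake of the example:

    import os
    import zlib

    def description_size(s: bytes) -> int:
        # Length of the zlib-compressed form of s: a rough proxy for
        # the length of the shortest "algorithm" that regenerates s.
        return len(zlib.compress(s, 9))

    patterned = b"ab" * 5000       # generated by a very short rule
    randomish = os.urandom(10000)  # no rule much shorter than the data

    print(len(patterned), description_size(patterned))   # 10000 -> a few dozen bytes
    print(len(randomish), description_size(randomish))   # 10000 -> roughly 10000 bytes

On that crude analogy, the "nontrivial" symbol systems would be the ones
whose interpretation (the mapping) is much shorter than the system it
maps, rather than as long or longer.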

> This is, of course, all leading us towards the hybrid system idea.
> Could our thoughts really be independent from our bodies?

Not quite how I would put it, but ok, I suppose...

> I had problems accepting Searle's test - it always seemed like a trick
> (Can we actually say we understand how _our_ minds process input and
> produce output? No.
> So we no more understand the symbol system going on in our heads than
> we do the memorised pen-pal program. So why is our symbol system the
> only mind present?)

The question wasn't whether we can understand our minds, but whether a
symbol-manipulator can understand Chinese. (If we understood how our
minds process input and produce output, we could solve all of the
problems of AI by introspection from our armchairs, just from reading
our own minds, and T2 and T3 would have been successfully passed long
ago!)

> At this point I'd like to point out my previous problems with Searle's
> CRA are well and truly wiped out - this is the difference between
> Searle's mind and the program he's memorised.

Always nice to welcome a new convert to the hybrid fold!

> Harnad goes on to point out that we could simulate a T3 robot, but it
> still wouldn't be thinking, it would still be ungrounded symbol
> manipulation. Only by interacting with the real world and grounding its
> understanding in what it interacts with can something be said to be
> cognizing. This seems to fit in with my understanding of how people
> work. We can of course imagine worlds different from our own,
> inventions not yet real etc. However, all these things must be based on
> the world we know. Otherwise, such things would make no sense to us.

It's not about other possible worlds, or worlds we imagine are
possible; it's about the difference between an object, on the one hand,
and a squiggle-system that is systematically interpretable as that
object, on the other.

> > HARNAD
> > For flying and heating, unlike computation, are clearly not
> > implementation-independent. The [property that is] shared by all things
> > that fly is that they obey the same sets of differential equations,
> > not that they implement the same symbol systems. The test, if you
> > think otherwise, is to try to heat your house or get to Seattle with the
> > one that implements the right symbol system but obeys the wrong set of
> > differential equations.
>
> At this point you may well be thinking "But flying / being hot are
> physical states. Thinking is a mental state". So what is a mental state
> if it is anything more than a physical thing? This is back to the Turing
> test, and if there is indeed some other thing present, we will never be
> able to produce machines that think.

It's back to T3, which is also a physical thing. (But, of course,
because of the other-minds-problem, and T3's impenetrability to
Searle's Periscope, we can never be sure a T3 has a mind, the way
we can be sure a T2 doesn't -- if it's just a symbol-cruncher.)

> It's interesting to note that there would be no
> need to stop at our 5 senses when designing a robot - why not
> incorporate the bat's sonar as well?

Whatever it takes; but since we pass (human) T3 without sonar, maybe
there's no point adding that too...

Good job Mark -- but remember to use a spell-checker first, next
time...

Stevan


