Re: Harnad (2) on Computation and Cognition

From: HARNAD, Stevan (harnad@coglit.ecs.soton.ac.uk)
Date: Mon Apr 17 2000 - 20:48:49 BST


On Fri, 7 Apr 2000, Pentland, Gary wrote:

> symbols are grounded, even if they only are in our own minds

Yes, but if we are trying to model and explain HOW they are grounded,
we have to ground them directly and autonomously, with no mediation
by other minds.

> How many interactions do you need to successfully ground all symbols?

Good question. And how many symbols do you need to ground to pass T3?
Who knows?

> If we can understand a Goedel sentence (not computable) then CTT does not
> apply to us

Not true. Goedel proved that there are true statements that are not
provable; "provable" does not equal "understandable" or "recognizable as
true."

> It would be nice if Harnad would define what HE means by equivalent to a
> computer.

I never use the phrase "equivalent to a computer." All I say is that
the right computer programme can make the states of a computer
equivalent to the states of (just about) any other physical system.
"Equivalent" here means to be systematically interpretable as
corresponding state-for-state, one-to-one (to as close an approximation
as you like).
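
To make "state-for-state" concrete, here is a toy sketch in Python (the
falling-object example is mine, purely an illustration): every
successive (position, velocity) pair the loop produces can be
interpreted as a state of a falling object, to as fine a time-step as
you like; but the interpretation is ours, and the loop itself is just
rule-governed symbol manipulation.

    # Toy illustration: each (position, velocity) pair is systematically
    # interpretable as a state of a falling object, one-to-one; the
    # interpretation is ours, the loop is just symbol manipulation.
    def simulate_fall(height, dt=0.01, g=9.8):
        position, velocity = height, 0.0
        states = []
        while position > 0:
            states.append((position, velocity))  # "corresponds to" the object's state
            velocity -= g * dt                   # update rules: squiggles and squoggles
            position += velocity * dt
        return states

    print(simulate_fall(2.0)[:2])  # roughly [(2.0, 0.0), (1.999, -0.098)]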

> But isn't computation "semanticaly interpretable", attaching a meaning to
> the squiggles and squoggles.

Semantically interpretable to/by us, but in and of itself (and to itself),
meaningless (like the words on a page). Not so in the case of our
thoughts.

> but isn't Harnad stating that the interpretation must be consistent,
> an inconsistent system will never make sense to anyone.

Technically, a contradiction can sometimes be contained, like a benign
tumour; but, yes, in general an inconsistent system will only make sense
as long as you don't look at it closely enough.

> Trivial systems, why mention them, we are only interested in systems that
> have a use, are interpretable aren't we?

Yes, but we needed to distinguish the case of the "symbol systems" that
have no interpretation from those that do; otherwise we might have
thought that all you needed was the symbols and symbol manipulation
rules, and not the meaningfulness.

> Surely if you swap the meanings of words you would be speaking a new
> language (TERRYESE?) or merely Chinese incorrectly, depending on the
> number of words you change, someone may still understand you.

Why not try it? Just pick two words (say, "two" and "words") and swap
their meanings. Now grep a big text for instances of "two" and "words"
and show me how they all make systematic sense with the swapped
meanings. Here's a start: "The sentence has more words than two."
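
If you want to see the trouble quickly, here is a throwaway Python
sketch (the word pair and the sample sentence are arbitrary) that does
the swap mechanically; swapping the tokens while keeping the old
meanings comes to the same thing as keeping the tokens and swapping the
meanings:

    import re

    SWAP = {"two": "words", "words": "two"}

    def swap_tokens(text):
        # Replace whole-word occurrences of either token with the other.
        return re.sub(r"\b(two|words)\b", lambda m: SWAP[m.group(1)], text)

    print(swap_tokens("The sentence has more words than two."))
    # -> The sentence has more two than words.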

> But what does "0" mean? Does it have a value, or is it just
> relative to 1 and -1?

If "1" stands for the degree to which 3 things are more things than
two things, then "0" stands for the degree to which 3 things are more
things than 3 things.
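
Put concretely, in Python (the collections are just placeholders):

    three_things = {"a", "b", "c"}
    two_things   = {"a", "b"}

    # "1" names the amount by which three things outnumber two things;
    # "0" names the amount by which three things outnumber three things.
    print(len(three_things) - len(two_things))    # 1
    print(len(three_things) - len(three_things))  # 0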

But what do you mean "just relative"?

> Does the symbol grounding problem solve itself by being grounded from
> within the system, but interpretable outside

It's not solved till you show HOW a system can be grounded "from within
the system." So far, that just sounds like squiggles and squoggles,
whether or not they are interpretable outside the system...

> the Turing test is flawed, if you pass the penpel test, do you have a
> mind or just a piece of software that can pass the penpal test? Turing has
> a good insight, but it is not enough to proove that a system has a mind,
> in fact can you proove that at all?

(1) The TT is not a proof. It just reminds you not to ask for MORE of a
model than you ask of one another.

(2) Remember to use a spell-checker before posting!

> a simulation of a mind (or robot) could pass T2 or T3 but
> would it be real? The Chinese room has fallen over at this point.

Maybe yes, maybe no; but if it's just a symbol system passing T2, then
probably not (because of the symbol grounding problem and Searle's
periscope).

> The physical state could be simulated, so if a mental state is a physical
> state it could also be simulated. Is simulation good enough, it may well
> pass the Turning test (T3), but if we know its a simulation then we know
> it's not real and therefore doesn't have a mind, again how do you proove
> that somthing HAS a real mind?

What do you mean by "simulation"? And how does something's being a
simulation help you know whether it's real or not? The way I know a fire
is not real is from the fact that it doesn't burn, not from the fact
that it's a simulation. If my car comes to a halt because something's
broken in it, and you stick something in place of the broken part that's
just a "simulation" of it, yet it gets my car going again, does that
mean it wasn't "real"?

> I wish that someone could work on the
> Turing test and improve on it. Even another way of
> suggesting that something has a mind, proof I think will be impossible.

All suggestions welcome!

[Gary, too much needless quoting: Only quote what you want to comment on
-- and whatever extra context is needed to understand it -- but no
more.]

Stevan


