On Tue, 14 Mar 2000, Boardman, Adam wrote:
> He applies his argument to Schank's relatively simple program that simulates
> the ability of the mind to understand stories. In particular, it only
> understands stories about restaurants. The understanding was reduced to such
> things as basic foodstuffs, food qualities, possible reactions, tips and
> payment of the bill. The program is then asked questions about the story,
> such as 'Was the foodstuff eaten?', to which it gives correct answers.
Yes, as an example (and because such "toy" programs -- t1 -- are all we
have so far); but it would apply exactly the same way to a full T2-scale
program, as you later note:
> I would be quite surprised if there really were many people stating that a
> program that can pass t1 (the toy level of the Turing Test) would be
> considered to literally understand; perhaps we should consider a program
> that can pass the T2 level (40 years of conversation as a pen pal).
> If we then apply this argument to a T2 simulation with a single program and
> no further tweaking from programmers, then Searle is assuming that such a
> feat is possible within an order of complexity that it could be internalised
> in a human or executed in a room such that a pen pal would believe it to be
> a mind that it was conversing with. Say you're talking about a 40-year T2 test.
>
> If the answers take 2 years for Searle to 'calculate' in his room, then the
> human conversing with it is unlikely to consider it to have a mind.
Pen-pals can take a long time (yes, sometimes years) to respond too. T2 is an
off-line task.
And the fact that Searle is slower, or wouldn't have the memory for all
of it -- do you think that's a way to save Strong AI? Since we're only
supposing ("suppose we had a T2-passing program," "suppose Searle
implemented it," etc.), why not suppose Searle was faster? Is mind to be
just a matter of processing speed? (How fast, then?)
Or just let Searle do part of it, since he's too slow to do it all:
Should he be getting a PARTIAL understanding of Chinese?
No, it should be obvious that speed and capacity are not really relevant
here; just the simple principle that anyone can do the squiggles, but
that won't make them understand...
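To make this concrete, here is a minimal, purely hypothetical sketch (in
Python; the names are invented and this is not Schank's actual program) of
the kind of t1 "restaurant story" squiggling at issue: rules map
question-symbols onto answer-symbols, and whoever or whatever executes the
rules need not understand a word of what they are about.

    # Hypothetical toy illustration (not Schank's actual program): a rule
    # table maps question-symbols onto answer-symbols. Whoever executes
    # these rules -- person or machine -- need not understand them.

    STORY = {"ordered": "hamburger", "eaten": True, "tip_left": False}

    RULES = {
        "Was the foodstuff eaten?": lambda s: "Yes" if s["eaten"] else "No",
        "What was ordered?":        lambda s: s["ordered"],
        "Was a tip left?":          lambda s: "Yes" if s["tip_left"] else "No",
    }

    def answer(question, story=STORY):
        # Pure symbol lookup: no grounding, no understanding.
        rule = RULES.get(question)
        return rule(story) if rule else "I don't know"

    print(answer("Was the foodstuff eaten?"))   # -> Yes

The executor's relation to these symbols is purely formal, and that is all
the Chinese Room asks us to notice.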
> If we
> assume that the instructions are simple enough to be executed quickly, then
> Searle would soon come to learn the language he is conversing in. His
> understanding wouldn't affect the answers he is giving; they remain
> determined by his instructions.
Coming to understand the meaning of the squiggles may or may not be
possible. (With the help of some decryption tools, maybe it would be.)
But it would be irrelevant, because in the case of the computer, the
understanding was supposed to be implemented by the implementing of the
squiggle system; the computer wasn't supposed to be gradually LEARNING
to understand Chinese after some time implementing the symbol
manipulation...
> Searle's response to [the SYSTEM REPLY]
> is to get the individual to internalise all the elements of the system.
> This is feasible on a t1 level where everything is nice and simple. But to
> internalise a T2-passing program, which would presumably be more complex
> than any program currently written, is another matter. Imagine trying to
> internalise and 'invaluably' run a 300 Mb install of any Microsoft
> software. Even trying to internalise a dictionary is the kind of
> thing that only an elite few humans can manage.
True, but is it relevant?
> > SEARLE:
> > II. The Robot Reply (Yale).
>
> This is in effect a kind of T3/T2 suggestion and would require another order
> of magnitude in programming complexity. Searle now suggests that the
> adding of perceptual and motor capacities adds nothing by way of
> understanding.
And if it's still just T2 (but with the robot bits just arbitrarily
added on, untested), he's right. But if the test becomes T3, then that
is immune to his Chinese Room Argument.
How could Searle BE the whole T3 system while still lacking... (what?).
> He changes his thought experiment so that now in addition to
> the Chinese symbols there is a stream of symbols representing the sensory
> data coming from the robot and some of the answers cause the movement of
> the robot.
That's not T3. That's a simulation of a SIMULATION of T3, which is all
just squiggles and squoggles, just as a simulated plane is.
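To see what such a "virtual robot" amounts to, here is a purely hypothetical
sketch (invented names, in Python): the "sensory" data arrive as symbol
strings and the "motor" commands leave as symbol strings; nothing touches
the real world.

    # Hypothetical illustration: a "virtual robot" is still just squiggles.
    # The "sensory" input is a stream of symbols and the "motor" output is
    # another stream of symbols; nothing here contacts the world.

    def virtual_robot_step(sensor_symbols):
        # Map input squiggles onto output squoggles by formal rules alone.
        if "OBSTACLE_AHEAD" in sensor_symbols:
            return ["TURN_LEFT"]
        return ["MOVE_FORWARD"]

    print(virtual_robot_step(["OBSTACLE_AHEAD", "LIGHT_LOW"]))  # -> ['TURN_LEFT']

Searle could squiggle through this too, and it would still be T2-style
symbol manipulation, not T3.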
> timing becomes crucially important because if your robot comes to a stair
You're absolutely right that timing is important for T3, because T3
capacity is a real-time, real-world capacity. (Timing was irrelevant for
T2, which is offline; also irrelevant for virtual T3, which is likewise
not real-time, and likewise just squiggling.)
> I suggest that you would need multiple Searles to handle
> a robot by symbol manipulation if it is to be able to react with any
> immediacy to its environment.
It's worse than that: T3 is not only about being able to UNDERSTAND but
being able to DO. DOING is not implementation-independent; neither is
having the wherewithal to DO. So Searle would have to either leave the
doing out (and just do the squiggling) -- in which case the System Reply
becomes correct -- or he would have to do what the T3 robot does (in
which case there would be nothing missing, unlike in the case of the
pen-palling with the understanding missing).
> > SEARLE:
> > If we had to know how the brain worked to do AI, we wouldn't bother
> > with AI.
Unless of course the only way to understand the brain would be to first
try to model it too (Weak neuro-AI?).
> This is a bit simplistic: the idea that you could construct enough pipes,
> appropriately arranged to exactly copy a Chinese mind, in such a way that each
> is identifiable and navigable by a man seems rather improbable. To merely set
> the initial values of all valves would take a few months, if not years.
You're repeating your time/capacity objection again here, but a stronger
one is that pipes have nothing to do with this. The FULL force of the
Chinese Room Argument is there only when he has completely internalized
the system. With T2, that was internalizing the T2-passing program. If
that program requires some neural simulation algorithms too, fine, no
need to build pipes. Just simulate them computationally, and then
implement the squiggling. The punch line is exactly the same (because this
is not T4, any more than the partial or virtual robot is T3).
> > SEARLE:
> > But the man certainly doesn't
> > understand Chinese, and neither do the water pipes
>
> This is like expecting our individual neurons to understand what they're
> doing. Also, internalising a whole human brain inside another at a biological
> level would be even more difficult than internalising a computer program.
He means (or should mean) the whole system, not the individual pipes
(neurons), and as I said, he can simulate it all. (Again, his slowness
is not a substantive factor; you don't have any theory or evidence of
how speed makes mind, any more than symbols make mind.)
> > SEARLE:
> > IV. The combination reply (Berkeley and Stanford).
> > imagine the computer programmed with all the synapses of a human
> > brain, imagine the whole behaviour of the robot is indistinguishable from
> > human behaviour
>
> Searle agrees that in this case it becomes rational and irresistible to
> accept the hypothesis. But he claims that it still wouldn't have a mind.
Sounds like real T3 (using some virtual T4 algorithms). Searle cannot BE
this System, only part of it, so all bets are off.
> Searle then mentions 'The other minds reply', which states that we can never
> know that another being has a mind except by being that other being; so if we
> attribute a mind to another human, we must be prepared to attribute one to a
> computer too. He counters this by saying that we know that simple computational
> processes don't have minds, so why should complex ones?
We know no such thing. (Maybe a PC running MSDOS has a mind; who
knows?) Searle's Periscope only comes into its own with T2-passing
programs about which we want to claim that any and every implementation
of them will understand Chinese (if any of them does). Then Searle can
implement it and show it doesn't.
> > SEARLE:
> > Now why couldn't we give those somethings, whatever they are, to
> > a machine?
>
> Searle gives no reason why we should not be able to do this, but we don't know
> what that something is, and he believes it cannot be defined in terms of a
> computer program.
Remember the Hexter quote:
"in an academic generation a little overaddicted to "politesse," it
may be worth saying that violent destruction is not necessarily
worthless and futile. Even though it leaves doubt about the right
road for London, it helps if someone rips up, however violently, a
`To London' sign on the Dover cliffs pointing south..." Hexter
(1979)
Searle shows us what cognition ISN'T (it's not just computation), but he
does not show us what it IS. We will move on to the Symbol Grounding
Problem and Hybrid Systems for some possibilities in that direction.
> > SEARLE:
> > Because the formal symbol manipulations by themselves don't have any
> > intentionality; they are quite meaningless; they aren't even symbol
> > manipulations, since the symbols don't symbolize anything. In the
> > linguistic jargon, they have only a syntax but no semantics. Such
> > intentionality as computers appear to have is solely in the minds of
> > those who program them and those who use them, those who send in the
> > input and those who interpret the output.
>
> This sounds pretty good: the symbols are ungrounded, and the best way of
> enabling the computer to understand the meaning of its symbols is to do it
> the human way, by learning. Get your computer to evolve and learn, starting it
> as an amoeba and letting it work its way up: wouldn't it then have a mind?
That can't be it, for learning is perfectly within the scope of T2! Ask
me what any symbol means and I'll TELL you!
(Now what's wrong with that?)
See:
Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad90.sgproblem.html
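One way to see what is wrong with "I'll TELL you" is the dictionary-go-round:
telling you what a symbol means only hands you more ungrounded symbols. A
purely hypothetical sketch (invented names, in Python):

    # Hypothetical illustration: defining ungrounded symbols only in terms
    # of other ungrounded symbols never bottoms out in meaning.

    DEFINITIONS = {
        "zebra":  ["horse", "stripes"],
        "horse":  ["animal", "mane"],
        "animal": ["living", "thing"],
    }

    def chase(symbol, depth=3):
        # Expanding a definition just yields more symbols, never meanings.
        if depth == 0 or symbol not in DEFINITIONS:
            return symbol
        return "(" + " ".join(chase(s, depth - 1) for s in DEFINITIONS[symbol]) + ")"

    print(chase("zebra"))   # -> (((living thing) mane) stripes): still just symbols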
> It seems entirely plausible that the act of memorising such a program might
> well teach one Chinese, or at least the part to do with restaurant stories.
Plausible, but irrelevant. (A computer implementing a program does not
LEARN what its program means; if it ever knows at all, it should know
it just in virtue of implementing it.)
> A computer simulation of understanding doesn't need to understand anything
> to be useful, so why should it?
Because we may want to understand understanding.
And because understanding understanding may help us design something
that can understand, hence do, even more...
> So, to conclude, Searle's arguments demonstrate that a t1-passing program
> cannot be said to have a mind, and that T2- and T3-passing programs, depending
> on how they are written, also don't have minds.
Actually, Searle never really considers T3 at all; his robot reply is
not to T3, but to T2-plus-peripherals. T3 never gets tested. If it did,
Searle could no longer implement the entire (hybrid) system that passes it
-- only the computational part.
> However I think that some of his
> arguments are doubtful and that a T2- or T3-passing program that has evolved
> and had a 'growing up' and an education could have a mind, with intentionality
> and causality.
Let's leave aside T3. It has strengths that need not even make use of
growing up. Learning and education are fully possible in T2. Does that
help a pure symbol-cruncher out of the Chinese Room?
> Why a 'story comprehension' simulating program could possibly be expected
> to have a mind: other animals that we consider to have minds don't understand
> stories.
Story-understanding is just t1. But animals are not on the (human)
T-scale at all.
> Expecting a human to internalise a computer program to perform any complex
> task without coming to some internal understanding of the task, such as
> learning Chinese.
What a human mind eventually comes to LEARN from experience in
implementing a T2 program is completely irrelevant to the question of
whether just implementing the program is all it takes to understand
Chinese.
> Expecting a simulation in a computer (with current hardware) or a human to be
> able to run at a speed such that its responses are still valid in the
> environment it is based inside.
Searle's timing handicaps are real, but not relevant to the point that
was being made (about T2, which is not a real-time test). It's like
saying that it's not possible in principle to get anywhere in the
cosmos by rocket, because in practice there's never enough fuel....
Stevan