> SEARLE:
> According to weak AI, the principal value of the computer in the study
> of the mind is that it gives us a very powerful tool. For example, it
> enables us to formulate and test hypotheses in a more rigorous and
> precise fashion. But according to strong AI, the computer is not merely
> a tool in the study of the mind; rather, the appropriately programmed
> computer really is a mind, in the sense that computers given the right
> programs can be literally said to understand and have other cognitive
> states. In strong AI, because the programmed computer has cognitive
> states, the programs are not mere tools that enable us to test
> psychological explanations; rather, the programs are themselves the
> explanations.
>
> Blakemore:
> These definitions are easy enough to understand, but the concepts are
> quite mind-blowing. I totally accept weak AI, using our current
> understanding of the mind to create useful tools. These smart tools seem
> to have intelligence beyond a normal program, but it is only the
> intelligence put there by a programmer. There is no understanding from the computer or
> the program.
Quite. I would like to stress the point that these tools SEEM to have
intelligence, rather than actually having intelligence.
> Strong AI actually states that all programs with cognitive states have a
> mind. Whilst I can accept that a super smart machine able to pass the
> Turing Test (say, up to T3) might have a mind, as it would be
> indistinguishable from our own minds, to say a toaster, mobile phone
> or a digital watch has a mind is absurd.
I don't think that this is quite the point. Can you say that a toaster,
mobile phone or digital watch has cognitive states?
> SEARLE:
> "A man went into a restaurant and ordered a hamburger. When the hamburger
> arrived it was burned to a crisp, and the man stormed out of the
> restaurant angrily, without paying for the hamburger or leaving a tip."
> Now, if you are asked "Did the man eat the hamburger?" you will
> presumably answer, "No, he did not." Similarly, if you are given the
> following story:
> "A man went into a restaurant and ordered a hamburger; when the hamburger
> came he was very pleased with it; and as he left the restaurant he gave
> the waitress a large tip before paying his bill," and you are asked the
> question, "Did the man eat the hamburger?" you will presumably answer,
> "Yes, he ate the hamburger."
> Partisans of strong AI claim that in this question and answer sequence
> the machine is not only simulating a human ability but also
> 1. that the machine can literally be said to understand the story and
> provide the answers to questions, and
> 2. that what the machine and its program do explains the human ability
> to understand the story and answer questions about it.
>
> Blakemore:
> The rest of Searle's paper tries to show these claims are false, if the
> machine only uses pure symbol manipulation.
So if the machine gives the same answers as humans would to the questions,
then it can be said to be simulating human ability. Agreed. While it is
not a trivial progression from question to answer in cases like this, I
disagree that there is any understanding going on. Do partisans of strong
AI claim that the machine can understand what is involved in a man going
into a restaurant and eating (or not eating) a hamburger? Can the machine
visualise this man storming out of the restaurant?
On the point of symbol manipulation: no one disputes that the meaning
of the question is entirely captured by the symbols that represent it, so
a machine could potentially extract as much information from it as a human
could. What is there, over and above symbol manipulation, that humans do?
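To make "pure symbol manipulation" concrete, here is a minimal sketch of
the sort of thing I have in mind (my own toy illustration in Python; the
rules and cue strings are invented for this example, not anyone's actual
program). It gives the "right" answers to the hamburger questions by
matching strings of symbols, and nothing more:

    # Toy illustration: answer the hamburger question by matching symbol
    # strings. The program attaches no meaning to "hamburger" or "tip";
    # it only checks whether certain character strings occur in the story.
    RULES = [
        ({"burned to a crisp", "stormed out"}, "No, he did not."),
        ({"very pleased", "large tip"}, "Yes, he ate the hamburger."),
    ]

    def answer(story):
        # Only the one question is handled, so the question text itself
        # is never consulted.
        for cues, reply in RULES:
            if all(cue in story for cue in cues):
                return reply
        return "I don't know."

    story = ("A man went into a restaurant and ordered a hamburger. When the "
             "hamburger arrived it was burned to a crisp, and the man stormed "
             "out of the restaurant angrily.")
    print(answer(story))   # -> No, he did not.

A program along these lines could be extended to give human-like answers
to many such stories, which is exactly why I agree that it simulates the
ability while nothing in it understands anything.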
> SEARLE:
> As regards the second claim, that the program explains human
> understanding, we can see that the computer and its program do not
> provide sufficient conditions of understanding since the computer and
> the program are functioning, and there is no understanding. But does it
> even provide a necessary condition or a significant contribution to
> understanding? One of the claims made by the supporters of strong AI is
> that when I understand a story in English, what I am doing is exactly
> the same -- or perhaps more of the same -- as what I was doing in
> manipulating the Chinese symbols. It is simply more formal symbol
> manipulation that distinguishes the case in English, where I do
> understand, from the case in Chinese, where I don't. I have not
> demonstrated that this claim is false, but it would certainly appear an
> incredible claim in the example. Such plausibility as the claim has
> derives from the supposition that we can construct a program that will
> have the same inputs and outputs as native speakers, and in addition we
> assume that speakers have some level of description where they are also
> instantiations of a program.
>
> Blakemore:
> I agree with Searle again. It does not seem to me that we only manipulate
> symbols for understanding. We infer things from the text, considering our
> own opinions, beliefs, feelings and knowledge when answering questions.
This can be seen as a reply to my question above, but what, for example, is
inference if it is not manipulation of symbols in some way? We consider
our opinions, which are going to be based on the words (symbols) in the
sentence (question). Could our opinions not be seen as functions of the
symbols in question, and could these functions not, in turn, be implemented
with symbol manipulation?
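Purely to illustrate what I mean by opinions being functions of the
symbols (again a toy sketch of my own in Python, with an invented word
table, and not a claim about how inference really works): an "opinion"
could itself be just a table from words to further symbols, combined by
yet more symbol manipulation.

    # Toy sketch: an "opinion" as a function from words (symbols) to other
    # symbols, combined by a further symbolic rule.
    OPINION = {
        "burned": "bad",
        "stormed": "angry",
        "pleased": "good",
        "tip": "good",
    }

    def infer_mood(words):
        # Gather word-level "opinions", then map them to a verdict;
        # every step is manipulation of symbols.
        attitudes = [OPINION[w] for w in words if w in OPINION]
        if "angry" in attitudes or "bad" in attitudes:
            return "unhappy customer"
        return "happy customer"

    print(infer_mood("the hamburger was burned and he stormed out".split()))
    # -> unhappy customer

Whether this is all there is to our own inference is, of course, the very
question at issue.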
> SEARLE:
> by the example [is that] the computer program is simply irrelevant to my
> understanding of the story.
>
> Blakemore:
> Searle is saying that the way in which we read information and process it
> could be done by a computer program, much like reading in a file from a
> computer disc. The syntax is checked at this stage in the program and
> possibly some level of the semantics. But the understanding comes from a
> much higher abstract level in the brain.
While I have suggested that all of the information extracted from the
question, and the information added from opinions, beliefs and so on,
could be obtained purely by symbol manipulation, I agree that there would
still be no understanding.
> SEARLE:
> I. The systems reply (Berkeley). "While it is true that the individual
> person who is locked in the room does not understand the story, the fact
> is that he is merely part of a whole system, and the system does
> understand the story. The person has a large ledger in front of him in
> which are written the rules, he has a lot of scratch paper and pencils
> for doing calculations, he has 'data banks' of sets of Chinese symbols.
> Now, understanding is not being ascribed to the mere individual; rather
> it is being ascribed to this whole system of which he is a part."
> My response to the systems theory is quite simple: let the individual
> internalize all of these elements of the system. He memorizes the rules
> in the ledger and the data banks of Chinese symbols, and he does all the
> calculations in his head. The individual then incorporates the entire
> system. There isn't anything at all to the system that he does not
> encompass. We can even get rid of the room and suppose he works
> outdoors. All the same, he understands nothing of the Chinese, and a
> fortiori neither does the system, because there isn't anything in the
> system that isn't in him. If he doesn't understand, then there is no way
> the system could understand because the system is just a part of him.
>
> Blakemore:
> Some people would reply that the information (program) would be too big to
> memorize, or that the person might make errors. But this is not the
> point of the argument. The point is that it is possible, no matter how
> big the program, for one person to encompass it, thus becoming the whole
> system. We can then ask that person (in their native language) if they
> understand what they are doing. The other response is that there are two
> minds: the human mind which executes the program, and another which is
> created as a result of the program and the computer. Are there then two
> minds? I don't think so, but I cannot prove it. The computer (the human)
> is aware of his own mind, but does he think there is another mind floating
> around (in his head, since the system is all in his head) understanding
> the program he may not even understand?
If someone were to reply that the program would be too big to memorize,
then they cannot believe that what the program achieves is exactly what
humans do, as we have all memorized 'the English version of the program'.
Or would the reply be that the program would be too large to memorize
alongside the one that we were already running?
As to whether there would be two minds, I too think not. It is impossible
for me to accept that there might be another mind being implemented.
> McCARTHY:
> Machines as simple as thermostats can be said to have beliefs, and
> having beliefs seems to be a characteristic of most machines capable of
> problem solving performance.
> SEARLE:
> Anyone who thinks strong AI has a chance as a theory of the mind ought
> to ponder the implications of that remark. We are asked to accept it as
> a discovery of strong AI that the hunk of metal on the wall that we use
> to regulate the temperature has beliefs in exactly the same sense that
> we, our spouses, and our children have beliefs, and furthermore that
> "most" of the other machines in the room -- telephone, tape recorder,
> adding machine, electric light switch -- also have beliefs in this
> literal sense.
>
> Blakemore:
> I agree with Searle that this is a silly remark. We should only consider
> a machine to have intelligence when it passes at least T2 (perhaps it
> should be higher).
What beliefs could a thermostat have? "I believe that the temperature is
20 degrees Celsius"? To push a point, a thermostat could be said to have
knowledge, taking input from a temperature sensor and the like, but not
belief.
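To see just how thin such a "belief" would be, here is a toy sketch (my
own, in Python, with invented names) of everything a simple thermostat
holds and does: one stored number and one comparison.

    # Toy sketch of a bang-bang thermostat. Its entire "cognitive state"
    # is a setpoint plus the last sensed temperature, compared numerically.
    class Thermostat:
        def __init__(self, setpoint_celsius):
            self.setpoint = setpoint_celsius   # target temperature

        def step(self, sensed_celsius):
            # Switch the heater purely on a numeric comparison; nothing
            # here resembles the content of a belief.
            if sensed_celsius < self.setpoint:
                return "heater on"
            return "heater off"

    t = Thermostat(setpoint_celsius=20.0)
    print(t.step(18.5))   # -> heater on
    print(t.step(21.0))   # -> heater off

It does not even store the sentence "the temperature is 20 degrees
Celsius"; it just compares two numbers, which seems closer to the
knowledge (or mere data) I described above than to any belief.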
> SEARLE:
> I thought the whole idea of strong AI is that we don't need to know how
> the brain works to know how the mind works. The basic hypothesis, or so
> I had supposed, was that there is a level of mental operations
> consisting of computational processes over formal elements that
> constitute the essence of the mental and can be realized in all sorts of
> different brain processes, in the same way that any computer program can
> be realized in different computer hardwares: on the assumptions of
> strong AI, the mind is to the brain as the program is to the hardware,
> and thus we can understand the mind without doing neurophysiology. If we
> had to know how the brain worked to do AI, we wouldn't bother with AI.
>
> Blakemore:
> I again agree with Searle's aside point, but I think AI would still exist.
> The fact that computer programs can be ported to other machines shows the
> independence of the program from the computer. In the same light, the
> mind would be separate from the brain, but is it? We don't know this.
It's my view that the mind is not separate from the brain. The operations
that the brain performs are implicit in the neurons and in how they are
arranged. That is, the algorithms are not implemented in 'software'
running on the brain, but are implicit in the 'hardware'. The mind is
something that the brain implements.
> SEARLE:
> "Could a machine think?"
> The answer is, obviously, yes. We are precisely such machines.
>
> Blakemore:
> Only if you define humans as machines. Although I don't know for
> certain, I do not think I am running a computer program, as I have
> physical state (action and consequence), understanding and thought. I
> wouldn't consider myself a machine.
Why is the answer to "Could a machine think?" obviously yes? I would argue
that it is no, and I certainly wouldn't say that it is obvious. I agree
with Blakemore's point.
> SEARLE:
> Assuming it is possible to produce artificially a machine with a nervous
> system, neurons with axons and dendrites, and all the rest of it,
> sufficiently like ours, again the answer to the question seems to be
> obviously, yes. If you can exactly duplicate the causes, you could
> duplicate the effects. And indeed it might be possible to produce
> consciousness, intentionality, and all the rest of it using some other
> sorts of chemical principles than those that human beings use.
>
> Blakemore:
> I totally agree. Completely and exactly (physically) copy a human brain
> and it should behave like us. However, using "other sorts of chemical
> principles" doesn't seem likely. The other chemical process must produce
> EXACTLY the same effects. I don't think you get exactly the same results
> using a different material from the original organic matter.
If you produce artificially a machine (although I'm about to argue that it
wouldn't be a machine) that was sufficiently like (exactly the same as?)
humans, then you're not making a machine, you are making a human. Again, I
agree with Blakemore's point - if you change the materials, you are not
going to get the same effects.
> Blakemore:
> [...] humans do information processing and find it natural to think of
> machines doing the same thing. However, all machines do is manipulate
> symbols.
The same information processing could be carried out by a machine
manipulating symbols, that is, it could produce the same output from the
same input, but this is not sufficient for having a mind.
> Blakemore:
> Thirdly, strong AI only makes sense given the dualistic assumption that,
> where the mind is concerned, the brain doesn't matter. In strong AI (and
> in functionalism, as well) what matters are programs, and programs are
> independent of their realization in machines.
I don't agree that where the mind is concerned the brain doesn't matter,
as I have stated before.
> Blakemore:
> I agree with the argument throughout this paper showing that formal
> programs in no way give a machine understanding.
I agree with the argument as a whole, but I disagree with some of the
elements of the argument, as I have shown.
Steve
sjlb197@soton.ac.uk