Re: Searle: Is the Brain a Digital Computer?

From: Yusuf Larry (kly198@ecs.soton.ac.uk)
Date: Mon Apr 02 2001 - 22:39:56 BST


> SEARLE:
> I think 2 can be decisively answered in the negative. Since programs are
> defined purely formally or syntactically and since minds have an intrinsic
> mental content, it follows immediately that the program by itself cannot
> constitute the mind. The formal syntax of the program does not by itself
> guarantee the presence of mental contents.

Yusuf:
What if the program stores (mental) states? The syntax itself will have no
meaning, but surely what it implements will (mindful computation).

> SEARLE:
> The argument rests on the simple logical truth that syntax is not the same
> as, nor is it by itself sufficient for, semantics. So the answer to the
> second question is obviously "No".

Yusuf:
If the symbols (the syntax) were grounded, such that a reference can be made
from symbol to item, then surely the reference (which will be part of the
syntax) has meaning. At this point, doesn't the syntax become sufficient
for semantics, and doesn't this mean that the answer to the question "Is the
mind a computer program?" will be "Yes"?
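
To make this concrete, here is a minimal Python sketch of the kind of
grounding I have in mind (the categories and the detector are invented
purely for illustration):

    # Ungrounded: "HORSE" is just a squiggle related to other squiggles.
    RULES = {"HORSE": ["ANIMAL", "HAS-STRIPES: no"]}

    # Grounded (hypothetical): the same symbol is also connected to a
    # detector over sensory input, so reference runs from symbol to item.
    def horse_detector(image_features: dict) -> bool:
        # Stand-in for a learned sensorimotor category detector.
        return (image_features.get("legs") == 4
                and image_features.get("mane", False))

    GROUNDING = {"HORSE": horse_detector}

    scene = {"legs": 4, "mane": True}
    if GROUNDING["HORSE"](scene):
        print("The symbol HORSE picks out this item in the world.")

Whether such a causal link amounts to real semantics is, of course, exactly
what Searle disputes.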

> SEARLE:
> So our question is not, "Is the mind a program?" The answer to that is, "No".
> Nor is it, "Can the brain be simulated?" The answer to that is, "Yes".

Yusuf:
The answer might be "Yes", but how much does it help us at the end of the
day? Just as a simulation of a plane cannot fly, a simulation of the brain is
just that, a simulation: an imitation of the behaviour of some existing or
intended system, or of some aspect of that behaviour. Hence it gives some
prediction of how the brain would function under certain circumstances, or
more precisely a model of part of the brain's functionality. We can build a
simulation of almost anything to predict its actions or model parts of it
without the simulation being based on, or being a replica of, what we are
modelling.

In essence, saying the brain can be simulated is a bit of a hazy statement
until the parameters are specified.

> SEARLE:
> The question is, "Is the brain a digital computer?" And for purposes of this
> discussion I am taking that question as equivalent to: "Are brain processes
> computational?"

Yusuf:
In essence, if brain processes are computational, then, like the
computational processes in a digital computer, the brain becomes the digital
computer that manages the effective execution of the required processes.

> SEARLE:
> Granted that there is more to the mind than the syntactical operations of the
> digital computer; nonetheless, it might be the case that mental states are at
> least computational states and mental processes are computational processes
> operating over the formal structure of these mental states.

Yusuf:
A very valid argument, and one that I agree with. However, this confuses me,
because in his Chinese Room Argument Searle showed that cognition cannot be
all computation, and yet here he seems to be saying there is no reason why it
cannot be computation.

> SEARLE:
> It is clear that at least some human mental abilities are algorithmic... It is
> furthermore a consequence of the Church - Turing thesis and Turing's
> theorem that anything a human can do algorithmically can be done on a
> Universal Turing Machine... Now it seems reasonable to suppose there might
> also be a whole lot of mental processes going on in my brain nonconsciously
> which are also computational. And if so, we could find out how the brain
> works by simulating these very processes on a digital computer.

Yusuf:
Yes, but since only some of our mental abilities are algorithmic, and only
some of the brain's processes might be computational, this goes to show that
the brain is not a digital computer, at least not wholly. This is because a
digital computer is basically only capable of performing computation.

> SEARLE:
> We thus have a well defined research program. We try to discover the
> programs being implemented in the brain by programming computers to
> implement the same programs. We do this in turn by getting the mechanical
> computer to match the performance of the human computer (i.e. to pass the
> Turing Test) and then getting the psychologists to look for evidence that the
> internal processes are the same in the two types of computer.

Yusuf:
Following Turing's thesis of Indistinguishability, we would therefore be
able to deduce/assume that the human brain is a digital computer.

> SEARLE:
> To find out if an object is really a digital computer, it turns out that we do not
> actually have to look for 0's and 1's, etc.; rather we just have to look for
> something that we could treat as or count as or could be used to function as
> 0's and 1's. Furthermore, to make the matter more puzzling, it turns out
> that this machine could be made out of just about anything.

Yusuf:
A digital computer is one that operates on discrete quantities. All
computation is done within a finite number system and with limited
precision, determined by the number of digits in the discrete numbers. The
numerical information is most often represented by two-state electrical
phenomena (on/off, current/no current, etc.) indicating whether the value of
a binary variable is a "zero" or a "one". The development/creation/existence
of a digital computer is, however, implementation independent.
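
As a small illustration of that limited precision, consider standard
floating-point arithmetic (a minimal Python sketch; the behaviour shown is
just ordinary IEEE-754 double precision):

    # 0.1 and 0.2 have no exact binary representation, so the discrete,
    # finite number system introduces a small representation error.
    a = 0.1 + 0.2
    print(a)          # 0.30000000000000004, not 0.3
    print(a == 0.3)   # False

    # Hardware integers are likewise discrete and bounded; for example,
    # 2**63 - 1 is the largest signed value in a 64-bit register.
    print(2**63 - 1)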

> SEARLE:
> Computationally speaking, on this view, you can make a "brain" that
> functions just like yours and mine out of cats and mice and cheese or levers
> or water pipes or pigeons or anything else provided the two systems are, in
> Block's sense, "computationally equivalent". You would just need an awful
> lot of cats, or pigeons or waterpipes, or whatever it might be.

Yusuf:
In essence, computation is implementation-independent, and specifically
hardware-independent. Does this therefore mean that a digital computer
cannot be identified by its physical properties?
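
A minimal Python sketch of what "computationally equivalent" could mean here
(the media and encodings are invented for illustration): the same abstract
NAND operation is realised over two quite different physical vocabularies,
and only the observer's encoding makes them the "same" computation.

    # Abstract computation: NAND over the abstract states {0, 1}.
    def nand(a: int, b: int) -> int:
        return 0 if (a == 1 and b == 1) else 1

    # Two hypothetical physical media, each with its own two states.
    VOLTAGE = {"+5V": 1, "0V": 0}        # electronic implementation
    PIPES = {"full": 1, "empty": 0}      # water-pipe implementation

    def run_in_medium(encoding: dict, a: str, b: str) -> int:
        # Interpret two physical states as bits and apply NAND.
        return nand(encoding[a], encoding[b])

    # The physical events differ entirely, yet under the observer's
    # encoding both systems compute the very same function.
    print(run_in_medium(VOLTAGE, "+5V", "+5V"))   # 0
    print(run_in_medium(PIPES, "full", "empty"))  # 1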

> SEARLE:
> The multiple realizability of computationally equivalent processes in different
> physical media was not just a sign that the processes were abstract, but that
> they were not intrinsic to the system at all. They depended on an
> interpretation from outside. We were looking for some facts of the matter
> which would make brain processes computational;

Yusuf:
The implementation-independence of computation informs us that computation
is not just abstract, but not intrinsic to the system at all; rather,
computation depends on some external interpreter.

If this is so, what is (or will be) the external interpreter in the case of
the human brain?

> SEARLE:
> There is no way you could discover that something is intrinsically a digital
> computer because the characterization of it as a digital computer is always
> relative to an observer who assigns a syntactical interpretation to the purely
> physical features of the system.

Yusuf:
When we get away from the theorist's view, where everything is a digital
computer, we come back to reality, where there are things that are digital
computers and others that aren't. This point must always be remembered.
Searle confuses the issue by stating later on that:
> SEARLE:
> Analogously, we might discover in nature objects which had the same
> sort of shape as chairs and which could therefore be used as chairs;
> but we could not discover objects in nature which were functioning as
> chairs, except relative to some agents who regarded them or used
> them as chairs.

Yusuf:
At the end of the day, chairs are things we sit on, so we do not need to tag
an item "chair" before it becomes a chair, in the same way that a digital
computer either is or isn't one.

> SEARLE:
> Since the computational operations of the computer can be analyzed
> into progressively simpler units, until eventually we reach simple
> flip-flop, "yes-no", "1-0" patterns, it seems that the higher-level
> homunculi can be discharged with progressively stupider homunculi,
> until finally we reach the bottom level of a simple flip-flop that
> involves no real homunculus at all. The idea, in short, is that
> recursive decomposition will eliminate the homunculi.

Yusuf:
Not really, because high-level computation (e.g. multiplication, as given in
the text) is purely syntactic and hence not intrinsic to the physics. The
homunculus fallacy therefore cannot be eliminated, because even at the lower
levels the computation still does not become intrinsic.
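
The decomposition Searle describes can be made concrete (a minimal Python
sketch; the levels are illustrative): addition reduces to boolean logic, and
boolean logic to NAND, yet nothing at the bottom level is intrinsically a
"1" or a "0" rather than, say, a voltage.

    # Bottom level: a "stupid" two-state primitive (flip-flop style).
    def nand(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    # One level up: boolean logic built purely out of NAND.
    def xor(a, b):
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    def and_(a, b):
        return nand(nand(a, b), nand(a, b))

    def or_(a, b):
        return nand(nand(a, a), nand(b, b))

    # Next level: one-bit addition built out of boolean logic; full
    # multiplication would be further layers of the same pattern.
    def full_add(a, b, carry):
        s = xor(xor(a, b), carry)
        c = or_(and_(a, b), and_(carry, xor(a, b)))
        return s, c

    print(full_add(1, 1, 0))  # (0, 1): 1 + 1 = binary 10

The "multiplier" homunculus is discharged into ever-stupider layers, but at
no level does the syntax become intrinsic to the physics.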

> SEARLE:
> So far, we seem to have arrived at a problem. Syntax is not part of physics.
> This has the consequence that if computation is defined syntactically then
> nothing is intrinsically a digital computer solely in virtue of its physical
> properties.

Yusuf:
Of course a digital computer isn't one solely because of its physical
properties. It is a causal system that produces results based on some input
or cause. These results are then interpretable (at least semantically) by an
external entity: us.

> SEARLE:
> The mechanisms by which brain processes produce cognition are supposed to
> be computational, and by specifying the programs we will have specified the
> causes of cognition.

Yusuf:
Basically, if computation is the way by which brain processes produce
cognition, then by defining the programs for the computation we would be
defining the causes of cognition.

> SEARLE:
> One beauty of this research program, often remarked, is that we do not need
> to know the details of brain functioning in order to explain cognition. Brain
> processes provide only the hardware implementation of the cognitive
> programs, but the program level is where the real cognitive explanations are
> given.

Yusuf:
I totally disagree with this statement. We cannot be sure that the details of
brain functioning do not have an indirect influence on cognition. To state
that "we do not need to know the details of brain functioning in order to
explain cognition" is like saying we do not need to know the weather in
Alabama to predict the weather in Southampton.

> SEARLE:
> The thesis is that there are a whole lot of symbols being manipulated in the
> brain, 0's and 1's flashing through the brain at lightning speed and invisible
> not only to the naked eye but even to the most powerful electron microscope,
> and it is these which cause cognition.

Yusuf:
But the brain is not a binary system that needs to process 0's and 1's, nor
is it a discrete system like a digital computer. The brain performs a great
deal of analogue and parallel processing, so how did he arrive at this
argument of 0's and 1's in the brain?

We should be able to make the distinction between the commercial computer,
which is binary, and our brains. To start to think we are binary would lead
nowhere.

So if these 0's and 1's have no physical presence, then computation in terms
of 0's and 1's cannot exist in the brain. I am not saying that the brain
doesn't perform computation, but that it does so in a different way from the
computer. Computation is, after all, implementation independent, and the
symbols just "squiggles" and "squoggles".
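
To make the discrete/analogue contrast concrete, here is a minimal Python
sketch (the leaky integrator is a textbook simplification, not a claim about
how the brain actually computes):

    # Discrete update: the state jumps between exactly two values.
    def flip(bit: int) -> int:
        return 1 - bit

    # Analogue update: a leaky integrator whose state moves through a
    # continuum of values, driven by a continuous input current.
    def leaky_integrate(v: float, current: float,
                        dt: float = 0.001, tau: float = 0.02) -> float:
        return v + dt * (-v + current) / tau

    v = 0.0
    for _ in range(100):
        v = leaky_integrate(v, current=1.0)

    print(v)        # somewhere strictly between 0.0 and 1.0
    print(flip(0))  # exactly 1; there are no intermediate states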

> SEARLE:
> The implemented program has no causal powers other than those of the
> implementing medium because the program has no real existence, no
> ontology, beyond that of the implementing medium. Physically speaking
> there is no such thing as a separate "program level".

Yusuf:
I disagree. If I had a robot, the code implementation that makes the limbs
move, aids speech, etc. would all be causal. So this statement isn't exactly
true; however, a PC just running code isn't really causal, except with
respect to the hardware.

> SEARLE:
> The human computer is consciously following rules, and this fact explains
> his behavior, but the mechanical computer is not literally following any rules
> at all. It is designed to behave exactly as if it were following rules, and so for
> practical, commercial purposes it does not matter. Now Cognitivism tells us
> that the brain functions like the commercial computer and this causes
> cognition. But without a homunculus, both commercial computer and brain
> have only patterns and the patterns have no causal powers in addition to
> those of the implementing media. So it seems there is no way Cognitivism
> could give a causal account of cognition.

Yusuf:
As I argued in my Turing skywriting about developing machines that can
think, there are operations we as people perform that are rule-based but
also implicit to us, such that they give a result whose derivation we cannot
explicitly describe. In a case like this, we are following rules, and what
we deliver is causal, based on the result.

> SEARLE:
> We can say that when I hit this key I got such and such results because the
> machine is implementing the vi program and not the emacs program; and
> this looks like an ordinary causal explanation. So the puzzle is, how do we
> reconcile the fact that syntax, as such, has no causal powers with the fact
> that we do give causal explanations that appeal to programs?

Yusuf:
We give causal explanations that appeal to programs because we expect
certain results from the program based on the syntactic rules within. Hence,
from the rules in the algorithm we know that IF we do A, Z happens (or the
result is Z), and IF we do B, Y happens. It will not do to say that because
the syntax might not be causal we cannot describe it in terms of its
results. That would be like saying we cannot define a problem in terms of
another (which probably wouldn't be a bad thing; we all know lots of writers
do this, quite annoyingly!).
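
A minimal Python sketch of that kind of program-level causal explanation
(the keybindings are invented stand-ins, not the real vi or emacs commands):
the same physical keystroke yields different results depending only on which
rule table the machine is implementing.

    # Two hypothetical rule tables standing in for "the vi program" and
    # "the emacs program". Same hardware, different syntax.
    VI_LIKE = {"x": "delete character", "i": "enter insert mode"}
    EMACS_LIKE = {"x": "begin extended command", "i": "insert the letter i"}

    def press(program: dict, key: str) -> str:
        # The "causal explanation" appeals to which rule table
        # (program) the machine is implementing.
        return program.get(key, "no binding")

    # IF we do A, Z happens; under a different program, Y happens.
    print(press(VI_LIKE, "x"))     # delete character
    print(press(EMACS_LIKE, "x"))  # begin extended command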

> SEARLE:
> In the brain computer there is no conscious intentional implementation of the
> algorithm as there is in the human computer, but there can't be any
> nonconscious implementation as there is in the mechanical computer either,
> because that requires an outside homunculus to attach a computational
> interpretation to the physical events.

Yusuf:
At what stage was the differentiation between the brain computer and the
human computer made? Because I must have missed it. And given this, what
exactly is this statement getting at?

> SEARLE:
> In the case of the brain, none of the relevant neurobiological processes are
> observer relative (though of course, like anything they can be described from
> an observer relative point of view) and the specificity of the neurophysiology
> matters desperately.

Yusuf:
So, if none of the neurobiological processes are observer relative, then
observer relativity should not be a problem for computation either. So what
is the answer? I suggest (as I did earlier) symbol grounding. Symbol
grounding, when used in a robot, reduces the problems with semantics and
causal code.

Yusuf:
Searle sets out in this paper to answer the question "Is the brain a digital
computer?", but never explicitly gives us an answer. He suggests that the
question is ill-defined, but in defining it another way he takes us away
from the answer.

Searle discusses the different difficulties that arise in trying to answer
the said question, but in doing so he neglects to consider the view
(supported by himself) that what we are made of is fundamental to our
intelligence. In that case, using a digital computer, or any machine short
of T5, to model the human brain would be foolhardy.

On the other hand, some good and mind-expanding points were raised, even if
some of them were irrelevant and others hazy.


