Re: Searle: Is the Brain a Digital Computer?

From: McIntosh Chris
Date: Mon Mar 05 2001 - 00:49:24 GMT

First, a definition of digital computer: a device that solves
problems by processing information in discrete, binary form. Most
man-made computers are of this kind.

In contrast, analog computers operate on directly measurable
(amounts of) quantities on a continuous scale, such as electrical
signals, with applications in areas such as simulation and robotics.

Because the digital computer is a sequential machine, it can
perform only one calculation at a time, so it cannot solve even
simple problems of multiple variables all at once. It must break
them into segments, store the intermediate results, and recall them
as needed.
The digital computer is good at solving algebraic equations and even
better at manipulating numbers, where it can offer high-speed
precision and data storage.
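
The segment-by-segment picture above can be sketched in code. This is
an illustrative toy of my own (the instruction format and names are
invented, not from the post): a sequential machine evaluates one
operation per step, storing each partial result in memory and
recalling it later.

```python
# Illustrative sketch: a sequential machine evaluates a multi-variable
# expression like (a + b) * (c - d) one operation at a time, storing
# intermediate results and recalling them as needed.

def run(program, memory):
    """Execute one three-address instruction per step: (op, dest, src1, src2)."""
    ops = {"add": lambda x, y: x + y,
           "sub": lambda x, y: x - y,
           "mul": lambda x, y: x * y}
    for op, dest, a, b in program:
        memory[dest] = ops[op](memory[a], memory[b])  # store the partial result
    return memory

# (a + b) * (c - d) with a=2, b=3, c=10, d=4, split into three segments
mem = run([("add", "t1", "a", "b"),    # t1 = a + b
           ("sub", "t2", "c", "d"),    # t2 = c - d
           ("mul", "r",  "t1", "t2")], # r  = t1 * t2, recalling t1 and t2
          {"a": 2, "b": 3, "c": 10, "d": 4})
print(mem["r"])  # → 30
```
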

Before Searle answers his main question, he looks at two others.
Firstly, he asks if the brain can be simulated by a digital computer.
He believes that since a precise description can be given of the brain,

> according to Church's thesis, anything that can be given a precise
> enough characterisation as a set of steps can be simulated on a
> digital computer.
… he concludes that brain simulation is clearly possible.
However, I'm not sure whether it would be possible to give the brain
the complete characterisation necessary for this simulation.

Searle also asks, "Is the mind a computer program?" If it is, then
presumably a digital computer could be found to implement the
program, allowing the possibility that the mind could also be a
digital computer. But clearly the mind is not such a program:

> Since programs are defined purely formally or syntactically and
> since minds have an intrinsic mental content, it follows immediately
> that the program by itself cannot constitute the mind.

So semantics requires more than just syntax. However, Searle still
asks, 'is the brain a digital computer', because although the brain
must be more than just a digital computer, it is worth investigating
whether such a computer could still be a crucial component. As the
topic of Searle's paper is cognitivism, he defines it alongside
Strong and Weak AI.

> I call the view that all there is to having a mind is having a
> program, Strong AI, the view that brain processes (and mental
> processes) can be simulated computationally, Weak AI, and the view
> that the brain is a digital computer, Cognitivism.

Searle considers Turing's 1950 paper, in which the Turing test is
introduced, as the foundation for the cognitivist view. The Universal
Turing Machine is one that can implement any algorithm whatever,
and there was excitement over whether the brain could be such a
machine. Of course it could not literally be one, as it lacks an
infinite memory: the unbounded tape for writing 1's and 0's as
necessary cannot be realised. Indeed, for problems like mental
arithmetic the brain seems to have very few usable 'tape slots'.
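
The tape machine Turing described can be sketched in a few lines.
This is an illustrative toy of my own, not code from either paper: a
machine that reads and writes symbols on a tape and shifts left or
right according to fixed rules, here incrementing a binary number.

```python
# A minimal Turing-style machine: it reads and writes symbols on a tape
# and shifts left or right according to its rules.  This one adds 1 to
# the binary number written on the tape.

def turing_increment(bits):
    tape = ["_"] + list(bits)      # "_" is the blank symbol
    head = len(tape) - 1           # head starts at the rightmost digit
    while True:                    # single state "carry": propagate the +1 leftwards
        if tape[head] == "1":
            tape[head] = "0"       # 1 + carry = 0, carry continues
            head -= 1              # shift left
        else:                      # "0" or blank
            tape[head] = "1"       # absorb the carry and halt
            break
    return "".join(tape).lstrip("_")

print(turing_increment("1011"))  # → "1100"
print(turing_increment("111"))   # → "1000"
```
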

> It is clear that at least some human mental abilities are algorithmic.
> For example, I can consciously do long division by going through
> the steps of an algorithm for solving long division problems. It is
> furthermore a consequence of the Church - Turing thesis and Turing's
> theorem that anything a human can do algorithmically can be done on
> a Universal Turing Machine. I can implement, for example, the very
> same algorithm that I use for long division on a digital computer.

Which mental abilities might not be algorithmic? And if some are
not, how could they be characterised precisely enough to permit
brain simulation?
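
Searle's long-division example can be made concrete. The following is
an illustrative sketch (my own transcription of the grade-school
procedure, not code from the paper), producing the quotient digit by
digit just as a human computer would:

```python
# Grade-school long division, step by step: bring down one digit at a
# time, ask how many times the divisor goes in, carry the remainder.

def long_division(dividend, divisor):
    quotient, remainder = "", 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down the next digit
        quotient += str(remainder // divisor)     # how many times does it go?
        remainder %= divisor                      # carry the remainder forward
    return int(quotient), remainder

print(long_division(1234, 7))  # → (176, 2)
```
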

For a definition of computation Searle returns to Turing's original
paper, which imagines a machine that can write 1's and 0's and shift
to the left or right according to a program of instructions. The 1's
and 0's do not require any particular physical realisation, he
explains:
> If you open up your home computer you are most unlikely to find any
> 0's and 1's or even a tape. But this does not really matter for the
> definition…we just have to look for something that we could treat as
> or count as or could be used to function as 0's and 1's.
> Computationally speaking, on this view, you can make a "brain" that
> functions just like yours and mine out of cats and mice and cheese or
> levers or water pipes or pigeons or anything else provided the two
> systems are… "computationally equivalent". You would just need an
> awful lot of cats, or pigeons or waterpipes, or whatever it might be.

It is rather doubtful that a collection of pipes or pigeons could
reproduce consciousness. Syntactical similarity does not imply that
the implementation or its physical effects will also be similar.
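
The computational-equivalence claim in the quote can be sketched
directly. In this illustrative toy (my own construction), the same
logical pattern, a NAND gate, from which any Boolean circuit can be
built, runs unchanged whatever pair of tokens is counted as 0 and 1:

```python
# The same syntax over arbitrary tokens: build a NAND gate where the
# "0" and "1" can be any two distinct objects whatever.

def make_nand(zero, one):
    """Build a NAND gate over an arbitrary pair of tokens."""
    def nand(a, b):
        return zero if (a, b) == (one, one) else one
    return nand

for zero, one in [(0, 1), (False, True), ("cat", "mouse")]:
    nand = make_nand(zero, one)
    # NOT x == x NAND x: the identical pattern in every realisation
    assert nand(one, one) == zero and nand(zero, zero) == one
    print(f"NOT {one!r} = {nand(one, one)!r}")
```
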

> Just as carburettors can be made of brass or steel, so computers can
> be made of an indefinite range of hardware materials.
> But there is a difference: The classes of carburettors and
> thermostats are defined in terms of the production of certain
> physical effects.
> That is why, for example, nobody says you can make carburettors
> out of pigeons. But the class of computers is defined syntactically
> in terms of the assignment of 0's and 1's. The multiple realizability
> is a consequence not of the fact that the same physical effect can be
> achieved in different physical substances, but that the relevant
> properties are purely syntactical. The physics is irrelevant except in
> so far as it admits of the assignments of 0's and 1's and of state
> transitions between them.

A physical device made from arbitrary components could not be
guaranteed to produce its defining physical effects. But because
whether something is a computer is determined purely by its
syntactical properties, a computer can be constructed from any
physical components, provided there are enough of them.
Since any object could be described syntactically in terms of 0's and
1's, everything could be described as a digital computer.
Furthermore, since syntax is not intrinsic to physics, a
computational interpretation must be ascribed to the physical world
and can never be discovered in it. Searle explains why this is a problem:

> Well, we wanted to know how the brain works, specifically how it
> produces mental phenomena. And it would not answer that question to
> be told that the brain is a digital computer in the sense in which
> stomach, liver, heart, solar system, and the state of Kansas are all
> digital computers … We wanted to know if there was not some sense
> in which brains were intrinsically digital computers in a way that
> green leaves intrinsically perform photosynthesis or hearts
> intrinsically pump blood. It is not a matter of us arbitrarily or
> "conventionally" assigning the word "pump" to hearts or
> "photosynthesis" to leaves. There is an actual fact of the matter.
> And what we were asking is, "Is there in that way a fact of the
> matter about brains that would make them digital computers?"
> It does not answer that question to be told, yes, brains are digital
> computers because everything is a digital computer.
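
The worry that "everything is a digital computer" can be sketched as
a toy of my own (the readings are made up): an arbitrary physical
record is assigned bits by an observer's chosen encoding, and
different encodings yield different "computations"; the syntax lives
in the mapping, not in the physics:

```python
# An arbitrary physical record "implements" a computation only under
# an observer's encoding of its states as 0's and 1's.

temperatures = [12.1, 19.4, 18.7, 11.3]   # some arbitrary measurements

def interpret(readings, threshold):
    """One observer's assignment: above threshold counts as 1, below as 0."""
    return [1 if t > threshold else 0 for t in readings]

print(interpret(temperatures, 15.0))  # → [0, 1, 1, 0]
print(interpret(temperatures, 12.0))  # → [1, 1, 1, 0], a different "computation"
```
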

Searle's Chinese Room argument demonstrated that semantics is not
intrinsic to syntax, by showing that symbols can be manipulated
according to the syntax without any understanding. His new point is
that syntax is not intrinsic to physics, since tokens are assigned
rather than discovered, and he elaborates as follows:

> to say that something is functioning as a computational process is
> to say something more than that a pattern of physical events is
> occurring. It requires the assignment of a computational
> interpretation by some agent. Analogously, we might discover in
> nature objects which had the same sort of shape as chairs and which
> could therefore be used as chairs; but we could not discover objects
> in nature which were functioning as chairs, except relative to some
> agents who regarded them or used them as chairs.

Searle now considers another difficulty with cognitivism: the
homunculus fallacy, which postulates a 'little man' in the mind to
help explain mental abilities. However, this merely shifts the thing
to be explained from the whole to a sub-part. It is no use
explaining vision by describing a visual system that implements some
algorithms and outputs a 3-D description of the world: who would then
be reading that output?
Hardware alone could not comprise a digital computer, since by itself
it is merely an electronic circuit exhibiting certain states and
patterns; yet the homunculus fallacy is invoked in the attempt to make
syntax intrinsic to physics. Dennett and others have tried to
discharge the homunculus as follows:

> Since the computational operations of the computer can be analyzed
> into progressively simpler units, until eventually we reach simple
> flip-flop, "yes-no", "1-0" patterns, it seems that the higher-level
> homunculi can be discharged with progressively stupider homunculi,
> until finally we reach the bottom level of a simple flip-flop that
> involves no real homunculus at all. The idea, in short, is that
> recursive decomposition will eliminate the homunculi.

Cognitivists will admit that higher levels of computation, such as
multiplication, are purely syntactical, and therefore observer
relative rather than intrinsic to the physics. But computation never
suddenly becomes intrinsic at any lower level, so the homunculus
fallacy cannot be escaped so easily.
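
The recursive-decomposition strategy in the quote can be sketched as
a toy (my own illustration, not Dennett's code): each level is
implemented by a 'stupider' routine, bottoming out in a blind
primitive that involves no intelligence at all:

```python
# Discharging the homunculus by recursive decomposition: multiplication
# reduces to addition, addition to a blind successor operation.

def successor(n):          # bottom level: a blind +1, the "flip-flop"
    return n + 1

def add(a, b):             # stupider homunculus: just repeats successor
    for _ in range(b):
        a = successor(a)
    return a

def multiply(a, b):        # top-level "homunculus": just repeats add
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

print(multiply(6, 7))  # → 42
```
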
Searle's next difficulty with cognitivism is that syntax has no
causal powers. Just as DNA causes particular inherited traits and
germs cause disease, so cognitivists would like to argue that
programs underlying brain processes cause cognition. But,

> The implemented program has no causal powers other than those of the
> implementing medium because the program has no real existence, no
> ontology, beyond that of the implementing medium. Physically speaking
> there is no such thing as a separate "program level".

> The human computer is consciously following rules, and this fact
> explains his behavior, but the mechanical computer is not literally
> following any rules at all. It is designed to behave exactly as if it
> were following rules, and so for practical, commercial purposes it
> does not matter. Now Cognitivism tells us that the brain functions
> like the commercial computer and this causes cognition. But
> without a homunculus, both commercial computer and brain have only
> patterns and the patterns have no causal powers in addition to those
> of the implementing media. So it seems there is no way Cognitivism
> could give a causal account of cognition.

A mechanical computer could not literally be following rules, since
it does not know what rules are. It simply behaves in a certain way
given its input, and this behaviour is usually interpreted as
rule-following.
The computer needs a homunculus, the user, in addition to its
implementing hardware, to perform meaningful computation. For the
brain to operate as a digital computer it therefore would also require
a homunculus. However, Searle sees a puzzle.

> we can say that when I hit this key I got such and such results
> because the machine is implementing the vi program and not the
> emacs program; and this looks like an ordinary causal explanation.
> So the puzzle is, how do we reconcile the fact that syntax, as such,
> has no causal powers with the fact that we do give causal
> explanations that appeal to programs?

To find the answer he removes the homunculus from the system:

> you are left only with a pattern of events to which someone from
> outside could attach a computational interpretation. Now the only
> sense in which the specification of the pattern by itself provides a
> causal explanation is that if you know that a certain pattern exists
> in a system you know that some cause or other is responsible for
> the pattern. So you can, for example, predict later stages from
> earlier stages.

So when a machine is implementing a program, it is implementing the
intentions of the homunculus. Causal explanations can now be given,
as the programmer has determined how the program will operate. The
program should not be interpreted as determining its own behaviour.

> We try to discover the programs being implemented in the brain by
> programming computers to implement the same programs. We do this in
> turn by getting the mechanical computer to match the performance of
> the human computer (i.e. to pass the Turing Test) and then getting the
> psychologists to look for evidence that the internal processes are the
> same in the two types of computer… to test the hypothesis we look for
> indirect psychological evidence, such as reaction times.

This is the research project that seeks to understand brain processes
by analysing the performance of computers over similar processes.
Searle disapproves, though: if we actually knew the brain's
processes, the explanation given via computers could be discarded;
and such an explanation would not be accepted for other sorts of
systems that we can simulate computationally. A successful
simulation of the weather, for example, would not give us a perfect
understanding of the underlying physical processes.

> you cannot explain a physical system such as a typewriter or a brain
> by identifying a pattern which it shares with its computational
> simulation, because the existence of the pattern does not explain how
> the system actually works as a physical system.

> But these conditions cannot be met by the brute, blind, nonconscious
> neurophysiological operations of the brain. In the brain computer
> there is no conscious intentional implementation of the algorithm as
> there is in the human computer, but there can't be any nonconscious
> implementation as there is in the mechanical computer either,
> because that requires an outside homunculus to attach a
> computational interpretation to the physical events. The most we
> could find in the brain is a pattern of events which is formally
> similar to the implemented program in the mechanical computer, but
> that pattern, as such, has no causal powers to call its own and hence
> explains nothing.

Searle finishes his argument against cognitivism by attacking the
notion of the brain as an information processing system, to prevent any
side-stepping of his previous arguments.
A computational system is an information processing system, and
some cognitivists will argue that the brain is intrinsically an
information processing system, so that a computer simulation can
actually duplicate the functional properties of the brain. In contrast,
they will say, most physical systems do not process information, so a
computational simulation of them will merely be a model.
Computer hardware has no intrinsic syntax or semantics. An
outside agent encodes input information and may interpret both the
electrical processing stages and the physical output.

> But now contrast that with the brain. In the case of the brain, none
> of the relevant neurobiological processes are observer relative
> (though of course, like anything they can be described from an
> observer relative point of view) and the specificity of the
> neurophysiology matters desperately.

The processes of the brain are real physical processes which depend
on the specific characteristics of that brain. The same physical
processes will take place irrespective of the observer.

> We do not in general suppose that computational simulations of brain
> processes give us any explanations in place of or in addition to
> neurobiological accounts of how the brain actually works.

In general, the most accurate simulations are achieved by knowing how
something works in the first place. However, what if we could not
discover a complete neurobiological account of the brain? Simulations
might then be the only way to draw new inferences.

> Suppose I see a car coming toward me. A standard computational
> model of vision will take in information about the visual array on my
> retina and eventually print out the sentence, "There is a car coming
> toward me". But that is not what happens in the actual biology. In
> the biology a concrete and specific series of electro-chemical
> reactions are set up by the assault of the photons on the photo
> receptor cells of my retina, and this entire process eventually
> results in a concrete visual experience.
> The biological reality is not that of a bunch of words or symbols
> being produced by the visual system, rather it is a matter of a
> concrete specific conscious visual event.
> In short, the sense of information processing that is used in
> cognitive science, is at much too high a level of abstraction to
> capture the concrete biological reality of intrinsic intentionality.
> The "information" in the brain is always specific to some modality
> or other. It is specific to thought, or vision, or hearing, or touch,
> for example.

Searle makes no exception for the brain: just as for any other
physical system, a simulation is not a duplication. Reactions in the
brain do not amount to information processing. Just as the
atmosphere does not process climate information to determine when
rising moisture should condense, so the brain does not process
visual information when deciding whether to react to an object
moving quickly into the field of view. Both are simply physical or
biological events.

Searle distinguishes these events and processes from formal symbol
manipulation, which he takes to characterise any genuine information
processing, i.e. computational, system.
Intuitively, brain processes that seem to be non-computational, such
as emotions perhaps, suggest that the brain could not be just an
information processing system.
But aren't there still conscious algorithmic brain processes that could
be interpreted as information processing? What if one area of the brain
directs another specialist area to process information, giving it the
necessary symbolic inputs and then making use of the computed results?
It is hard to accept Searle's suggestion that the brain does no
information processing at all.

> you could not discover that the brain or anything else was
> intrinsically a digital computer, although you could assign a
> computational interpretation to it as you could to anything else.
> The point is not that the claim "The brain is a digital computer"
> is false. Rather it does not get up to the level of falsehood. It
> does not have a clear sense.

Searle concludes that his question has been ill-defined: it makes no
sense to ask whether something is intrinsically an instance of a class
whose membership is assigned rather than discovered. It is worth
noting, however, that the brain excels at parallel tasks such as
face-recognition, and is weaker at sequential tasks, especially if
they involve numbers. So in any event it seems we already have an
intuitive basis for denying that the brain could be a digital computer.

This archive was generated by hypermail 2.1.4 : Tue Sep 24 2002 - 18:37:20 BST