Chapter 2: Explanation & Simulation

From: Harnad, Stevan
Date: Sat May 10 1997 - 21:13:56 BST


To explain the mind you have to explain what the mind can DO.
What can the mind do? It can see, hear, touch, feel, move,
speak, understand, read, write, solve problems, explain, think.

Computation can be used in two different ways in trying to explain the mind:

(1) Perhaps cognition IS some kind of computation:

This approach is called "computationalism" (or "Strong AI" [AI is
Artificial Intelligence]).

(2) Perhaps we can SIMULATE cognition using computation.

This approach (sometimes called "Weak AI") is used by many different
disciplines -- from astronomy to automotive engineering. Computers can
be used to SIMULATE systems, and if the simulation captures the right
features, it can be used to predict as well as to explain.

Computers can be used to simulate planetary movement and to predict
things like the next return of comet Hale-Bopp. They can also be used to
test new car designs without having to build the car.
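To make the idea of simulation-as-prediction concrete, here is a minimal sketch (toy parameters in arbitrary units, not Hale-Bopp's actual orbit): a body circling a star, stepped forward with semi-implicit Euler integration. Running the simulation forward "predicts" where the body will be later.

```python
# Toy orbital simulation (hypothetical parameters, arbitrary units).
# The physics is Newtonian gravity; the integrator is semi-implicit
# Euler, which keeps the orbit stable over long runs.
import math

GM = 1.0           # gravitational parameter of the star
x, y = 1.0, 0.0    # start one unit from the star
vx, vy = 0.0, 1.0  # speed chosen so the orbit is circular
dt = 0.001

period = 2 * math.pi          # analytic period of this circular orbit
for _ in range(int(period / dt)):
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3
    vx += ax * dt             # update velocity first (semi-implicit Euler)
    vy += ay * dt
    x += vx * dt              # then position, using the new velocity
    y += vy * dt

# After one full period the body should be back near its starting point:
print(math.hypot(x, y))       # distance from the star stays close to 1.0
```

The prediction can then be checked against observation, which is exactly how simulations earn their keep in astronomy.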

Computationalism (1) is probably wrong. Read about Searle's Chinese Room
Argument (below) to find out why.

According to computationalism, the mind is a symbol manipulation system.
Mental states are states in a symbol system which consists of symbols
and rules for manipulating them (rather like the grammatical rules of a
language). Propositions (strings of symbols such as "2 + 2 = 4"
or "the cat is on the mat") are examples of the kind of thing that
computationalists think is going on in the brain. They think there is a
"language of thought" -- not a natural language like English or French,
but very much like it.
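What "symbol manipulation by formal rules" means can be sketched in a few lines (an illustrative toy, not a model of the brain): the symbols below are just strings, and the rule cares only about their shape, never their meaning.

```python
# A minimal symbol system: one inference rule (modus ponens) applied
# to proposition strings.  The rule matches symbols purely by shape;
# the "meanings" in the strings play no causal role at all.
def modus_ponens(facts, rules):
    """From fact 'P' and rule ('P', 'Q'), derive 'Q' by string matching."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

facts = {"the cat is on the mat"}
rules = [("the cat is on the mat", "the mat is under the cat")]
print(sorted(modus_ponens(facts, rules)))
```

Swap every string for a nonsense token and the derivation goes through unchanged, which is the point: the system runs on syntax alone.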

A Turing Machine is the abstract prototype of a symbol system: it gets
symbols as input and it produces symbols as output; all it can do is read a
symbol, write a symbol, move its input tape forward, or stop. It has
a machine "table" inside, which is a set of formal rules indicating what
state the machine should go into, given the state it is already
in and the symbol it reads on its tape. So the machine simply
moves into different states, depending on its machine table and the data
on its input tape.
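The read/write/move/halt cycle above can be sketched directly (a hypothetical machine table, chosen only for illustration): the table maps each (state, symbol-under-head) pair to a new state, a symbol to write, and a move.

```python
# A minimal Turing Machine.  The machine "table" is a dict:
# (state, symbol read) -> (new state, symbol to write, head move).
def run_turing_machine(table, tape, state="start"):
    tape = list(tape) + ["_"]        # "_" is the blank symbol
    head = 0
    while state != "halt":
        symbol = tape[head]
        state, write, move = table[(state, symbol)]
        tape[head] = write           # write a symbol
        head += 1 if move == "R" else -1  # move the tape head
    return "".join(tape).rstrip("_")

# A toy table that inverts a binary string, then halts on blank:
invert = {
    ("start", "0"): ("start", "1", "R"),  # read 0: write 1, move right
    ("start", "1"): ("start", "0", "R"),  # read 1: write 0, move right
    ("start", "_"): ("halt",  "_", "R"),  # blank: stop
}
print(run_turing_machine(invert, "0110"))  # -> 1001
```

Everything the machine does is fixed by the table and the tape, which is what makes it a purely formal symbol manipulator.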

A real digital computer is an approximate physical example of a Turing
Machine. (There are details about kinds of Turing Machines that
you need not worry about this year.)

The symbols in a symbol system are arbitrary in relation to what they
can be interpreted as meaning. This is true of the symbols in English
too: the word "apple" does not look like an apple, nor is it causally
connected to an apple in any way (except through the mind of
someone who uses it to mean apples).

An algorithm is a symbol manipulation rule, like a recipe.
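A classic example of such a recipe is Euclid's algorithm for the greatest common divisor: a finite sequence of symbol-manipulation steps that always terminates with the answer.

```python
# Euclid's algorithm: repeatedly replace the pair (a, b) with
# (b, a mod b) until the remainder is zero.  A recipe of purely
# formal steps -- no understanding of "number" required to follow it.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b   # one step of the recipe
    return a

print(gcd(48, 18))  # -> 6
```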

Neural nets are systems of interconnected nodes: They have an input
"layer" (for some nets you can think of the input layer as being the
sensory surface such as the retina), an output layer (sometimes to be
thought of as generating speech, or movement) and one or more "hidden"
layers. (If there are only the input and output layers, the net is called
a "perceptron". The perceptron is the one that can't solve XOR.)
There are several different kinds of neural nets: Supervised networks
(e.g. backpropagation) readjust the strengths of their connections
by strengthening the connections that led to a correct response and
weakening the connections that led to a wrong response. Examples of
unsupervised nets are Hebbian nets that simply strengthen connections
between units that are often active at the same time (and vice versa).
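The unsupervised Hebbian rule is simple enough to write out directly (learning rate and patterns below are illustrative): a connection grows in proportion to how often its two units are active together.

```python
# The Hebbian rule: "units that fire together, wire together."
# w[i][j] is the connection strength from input unit i to output unit j.
def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen w[i][j] whenever pre-unit i and post-unit j are co-active."""
    for i in range(len(pre)):
        for j in range(len(post)):
            w[i][j] += lr * pre[i] * post[j]
    return w

w = [[0.0, 0.0], [0.0, 0.0]]
# Present the same pattern five times: input unit 0 and output unit 1
# are repeatedly active at the same time.
for _ in range(5):
    w = hebbian_update(w, pre=[1, 0], post=[0, 1])
print(w)  # only the connection from input 0 to output 1 has grown
```

No teacher tells the net the right answer; the statistics of co-activation alone shape the weights, which is what makes the rule unsupervised.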

There are "localist" nets, in which a single unit or set of units
stands for something. There are also "distributed" nets in which it is
the pattern of activity distributed across many units that stands for
something or does the work. Neither kind of net is very brainlike as
yet, so nets should be thought of as theories of how the mind might
work, rather than as brain mechanisms -- at least for the time being.

Supervised nets have been used to simulate human learning: A backprop net
was trained by Rumelhart and McClelland to receive present tense English
verbs as input and to produce the past tense form as output (e.g.,
start --> started, or give --> gave). This net learned the irregular
verbs (give/gave) by memorizing them, but it made mistakes with regular
verbs (start/started). The net was only a perceptron, however. Nets
with hidden layers perform better, and there is no reason to think that
they could not learn to perform as well as we do, given enough time,
examples, and units.
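Supervised learning of this kind can be shown in miniature (a toy stand-in, not the verb task, which needs far more machinery): a single unit trained with the perceptron rule, which nudges weights after every error. It learns logical OR, which, unlike XOR, a perceptron can learn.

```python
# The perceptron learning rule: after each response, adjust each
# connection in the direction that would have reduced the error.
def step(x):
    return 1 if x > 0 else 0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
w = [0.0, 0.0, 0.0]   # bias weight plus one weight per input

for _ in range(10):   # a few passes over the training set
    for (x1, x2), target in data:
        x = (1, x1, x2)                          # 1 is the constant bias input
        out = step(sum(wi * xi for wi, xi in zip(w, x)))
        error = target - out                     # +1, 0, or -1
        for i in range(3):
            w[i] += error * x[i]                 # strengthen or weaken

print([step(w[0] + w[1] * x1 + w[2] * x2) for (x1, x2), _ in data])  # -> [0, 1, 1, 1]
```

Backpropagation generalizes this error-driven nudging to nets with hidden layers, which is what the past-tense simulations require.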

Computationalists vs. Connectionists:

Computationalism (symbol systems) vied with connectionism (neural nets)
to be the explanation of the mind, but it seems much more sensible to
suppose that both make a contribution for things they are good at:
Symbol Systems are good at logic, language, reasoning, calculation.
Neural nets are good at learning (such as the associative learning
discussed in Chapter 10) and at sensory processing. A hybrid system
would consist of both, each doing what it's good at.

A lot of these topics were covered last year, and your best way to find
a kid-sib explanation is to read last year's skywriting on them in
the three URLs below:

View them by "Subject" or by "Thread": Subject looks better, but it
gives the thread backwards (last reply, then next to last comment, etc.)
whereas Thread gives the skywriting in forward order.

  Subject: The Mind/Body Problem
  Subject: What Is a Machine?
  Subject: What Is Behaviour?
  Subject: Forward vs. Reverse Engineering
  Subject: Mental Imaging
  Subject: Images Vs. Symbols
  Subject: Computation and Cognition
  Subject: Do Computers Have Minds?
  Subject: Granny Objections to Computers' Having Minds
  Subject: Neural Nets
  Subject: Behaviourism
  Subject: Algorithms
  Subject: How to get HOW from WHEN and WHERE
  Subject: Categorisation and Prototypes
  Subject: The Homunculus
  Subject: Pinker's Critique of Neural Nets
  Subject: Symbol Grounding Problem
  Subject: Parameter Setting
  Subject: Does PDP = Nets = Connectionism?
  Subject: Pylyshyn's Critique of Neural Nets
  Subject: What was wrong with Behaviourism?
  Subject: Introspection
  Subject: Consciousness
  Subject: Minsky's Critique of the Perceptron
  Subject: Symbol Systems
  Subject: Supervised Vs. Unsupervised Learning
  Subject: Universal Grammar
  Subject: Backpropagation
  Subject: Retinotopic Maps
  Subject: Neural Nets Vs. Symbol Systems
  Subject: Are We Machines?
  Subject: Searle's Chinese Room Argument
  Subject: Skinner on Language Learning
  Subject: Turing Machine
  Subject: Propositions
  Subject: Cognitivism Vs. Behaviourism
  Subject: The Poverty of the Stimulus
  Subject: Modularity
  Subject: Syntax Vs. Semantics
  Subject: The Other-Minds Problem
  Subject: Analog Processing
  Subject: The Homunculus Problem
  Subject: What is Computation?
  Subject: The Arbitrariness of the Symbol
  Subject: The Exclusive-Or (XOR) Problem
  Subject: Psychology's Observables
  Subject: Categorical Perception
  Subject: What Was Right About Behaviourism?


Subject: The Mind of a Mnemonist
Subject: The Man with the Shattered World
Subject: Miller: Magical Number 7 +/- 2
Subject: Skinner
Subject: Funes the Memorious
Subject: Rosch: Categorisation
Subject: Grandmother objections
Subject: Searle's Chinese Room Argument
Subject: Searle: Minds, Brains & Programs
Subject: Imagery Debate
Subject: Symbol Grounding Problem
Subject: Classical Categorisation
Subject: Computation


And in:

Subject: What Makes Psychology Different?
Subject: Introspection: The Science of Experience
Subject: Anosognosia
Subject: The Man with the Shattered World
Subject: The Mind of a Mnemonist
Subject: Funes the Memorious and memory capacity
Subject: On Luria's "Z" (by JC)
Subject: On Luria's "Z" (by SP)
Subject: On Remembering Everything (by DC)
Subject: On Remembering Everything (by LL)
Subject: On Introspection (by EF)
Subject: Chomsky vs. Skinner on Language
Subject: On Introspection (by AB) (Revised)
Subject: The Poverty of the Stimulus
Subject: Language

This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:52 GMT