Re: Pylyshyn: Against Imagery

From: Harnad, Stevan (
Date: Sun Jan 14 1996 - 20:28:42 GMT

> From: "Parker, Chris" <>
> Date: Mon, 27 Nov 1995 08:41:25 +0000
> Pylyshyn wants to maintain that computation is a literal model of
> mental activity and not simply a simulation (p43 Chap2).

That's right. We know that computation can SIMULATE just about anything
-- planetary motion, weather, airplanes, furnaces. It does all of this
through computation, but no one would say that movement, weather, flying
or heating were computation, or that planes, furnaces, etc. are
computers, computing. In the special case of cognition, though, Pylyshyn
wants to suggest that it is LITERALLY computation, not just simulated by
computation.

> Both computation and cognition are rule governed processes (p57) yet
> when we talk about similar functions we refer to similar input-output
> pairs (p.50) in a formalist (not necessarily following the same set of
> steps) and universal way that doesn't concern itself with internal
> processes but can be applied to anything (p55)?? I'm confused. So what
> do the rules apply to if it is not the internal process, or do the
> rules only limit options rather than completely define mechanisms.

Computations are symbol manipulations, and computational systems are
symbol systems:

A symbol system is:

(1) a set of arbitrary "physical tokens" (scratches on paper, holes on
a tape, events in a digital computer, etc.) that are

(2) manipulated on the basis of "explicit rules" that are

(3) likewise physical tokens and strings of tokens. The rule-governed
symbol-token manipulation is based

(4) purely on the shape of the symbol tokens (not their "meaning"),
i.e., it is purely syntactic, and consists of

(5) "rulefully combining" and recombining symbol tokens. There are

(6) primitive atomic symbol tokens and

(7) composite symbol-token strings. The entire system and all its parts
-- the atomic tokens, the composite tokens, the syntactic manipulations
(both actual and possible) and the rules -- are all

(8) "semantically interpretable:" The syntax can be systematically
assigned a meaning (e.g., as standing for objects, as describing states
of affairs).
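Points (1)-(8) can be made concrete with a toy sketch (mine, not Pylyshyn's): a few arbitrary string tokens rewritten by an explicit rule table, purely by shape.

```python
# A minimal sketch (illustrative, not from the text) of a symbol system:
# arbitrary tokens, manipulated by explicit rules, purely by shape.

# (1) arbitrary "physical tokens" -- here, just strings
ATOMS = {"0", "1", "+"}

# (2)-(3) explicit rules, themselves just token strings:
# each maps one token string to another, by shape alone
RULES = {
    ("0", "+", "1"): ("1",),
    ("1", "+", "0"): ("1",),
    ("1", "+", "1"): ("1", "0"),   # interpretable as binary: 1 + 1 = 10
}

def rewrite(tokens):
    """(4)-(5) Rulefully recombine tokens based purely on their shapes."""
    t = tuple(tokens)
    return list(RULES.get(t, t))

# (8) the whole system is semantically interpretable (as binary addition),
# but the rewrite step never consults that interpretation.
print(rewrite(["1", "+", "1"]))   # ['1', '0']
```

Nothing in `rewrite` "knows" that the tokens stand for numbers; the interpretation is systematically assignable from outside, which is exactly the point.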

Here are some examples of symbol systems:

(a) The axioms and theorems of arithmetic or geometry

(b) Sentences in a book

(c) A computer programme

All share the property that they consist of a bunch of squiggles and
squoggles, combined according to rules, and able to be given a
systematic interpretation: they are "about" something, although the
shapes of the symbols are arbitrary, and the rules for combining them
apply only to their shapes. (Hence a and c are pure examples; b is
more complex, because the meanings of the symbols influence the
combinations, not just their shapes: but in what do those meanings
consist? That is a further question.)

> At this stage I lost the battle trying to understand "the Numerical Example".

The numerical example was just meant to show you how syntax works. You
learn arithmetic and you are told this is a "0" and this is a "1" and
this is an "=" and "1 + 1 = 2" is ok, but neither "1 + 1 = 3" nor " = 1
3 +" is ok. All this is based on rules for combining symbols based on
their (arbitrary) shapes, not on the meaning of the symbols, though the
symbols CAN be interpreted as having meaning: "=" means =.
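The legality judgments above can be sketched as a purely syntactic checker (my illustration, not Pylyshyn's): a lookup table over digit shapes plus a shape test for the string as a whole. The table is just pairings of tokens; nothing in it "understands" addition.

```python
# A sketch of the numerical example: whether strings like "1 + 1 = 2"
# are ok is decided by shape-based rules alone. The sum table is a pure
# lookup over token shapes.

SUM = {("0", "0"): "0", ("0", "1"): "1", ("1", "0"): "1", ("1", "1"): "2",
       ("1", "2"): "3", ("2", "1"): "3", ("2", "2"): "4"}

def legal(s):
    parts = s.split()
    # required shape: digit "+" digit "=" digit
    if len(parts) != 5 or parts[1] != "+" or parts[3] != "=":
        return False                       # e.g. "= 1 3 +" fails here
    return SUM.get((parts[0], parts[2])) == parts[4]

print(legal("1 + 1 = 2"))   # True
print(legal("1 + 1 = 3"))   # False
print(legal("= 1 3 +"))     # False
```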

As other examples, think of how you can manipulate the beads on an
abacus to do arithmetic. Or how you do long division -- or, better,
factoring quadratic equations: remember (-b+/-SQRT(b**2-4ac))/2a ?
What can be a better example of symbol manipulation than that, when used
to factor equations of the form ax**2 +bx +c = 0? In that case you may not
even KNOW the intended meaning of the symbols, in which case you will do
the equation as a "cookbook recipe," which is exactly what a computer
does, and what Searle in the Chinese room does.
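That cookbook recipe can be written out mechanically (a sketch, assuming the real-root case only), applied exactly as someone could apply it without knowing what the symbols mean:

```python
import math

# The "cookbook recipe" (-b +/- sqrt(b**2 - 4ac)) / 2a, followed step by
# step, for equations of the form a*x**2 + b*x + c = 0.

def roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 (real case only, for illustration)."""
    d = b * b - 4 * a * c
    if d < 0:
        raise ValueError("no real roots")
    s = math.sqrt(d)
    return ((-b + s) / (2 * a), (-b - s) / (2 * a))

# x**2 - 5x + 6 = (x - 2)(x - 3), so the roots are 3 and 2
print(roots(1, -5, 6))   # (3.0, 2.0)
```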

Yet though it's all just based on cookbook recipes (syntax) it can all
be given a meaningful, systematic interpretation: as numbers and the
roots of quadratic equations -- which, to use a dramatic example, might
save a life, in cancer chemotherapy calculations, for example.

> The end of the chapter describes various control schemes including
> Newell's novel, non-standard "production system" (p82-) which seemed to
> me to be more understandable if fitted into a specific imaginary
> example. I picked a runner on the starting block for an interpretation
> of the scheme.
> 1) the runner is on the block waiting for the go and in her "production
> system" there is a "workspace" which is a communication area and a set
> of "productions" which are modular condition-action pairs. The
> production system can only respond to a limited number of symbols (
> originating from the environment or some earlier production) and then
> evoke productions.
> 2) the starter pistol goes off and a symbol arrives in the workspace,
> is recognised by satisfying some condition required by a specific
> start-production, evoking the production which then results in the
> action of starting (or "goal" to start). One symbol could represent a
> whole group of other symbols to save limited workspace resources
> (chunking). Other modular productions may be taking place at the same
> or partially overlapping times.

That sounds like one possible interpretation of Newell's syntax.
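The runner interpretation can itself be sketched in code (my illustration; the names are hypothetical, not Newell's): a workspace of symbols plus modular condition-action productions, where a production fires when its condition symbol appears in the workspace.

```python
# A hedged sketch of a production system along the lines Chris describes:
# a workspace (communication area) holding symbols, and productions that
# are modular condition-action pairs.

workspace = set()

def fire_start():
    workspace.add("STARTED")          # the action posts a new symbol

# each production: (condition symbol, action)
productions = [("PISTOL", fire_start)]

def cycle():
    """One recognise-act cycle: every production whose condition symbol
    is present in the workspace is evoked."""
    for condition, action in productions:
        if condition in workspace:
            action()

workspace.add("PISTOL")               # the starter pistol goes off
cycle()
print("STARTED" in workspace)         # True
```

One symbol here could of course stand in for a whole chunk of others, which is the "chunking" economy Chris mentions.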

> Looking ahead to the next few chapters, there is some hint of what
> comes next, ie functional architecture, which presumably is the
> infrastructure for the production systems??

The functional architecture is where cognition starts. What's above that
is cognition, what's below that is just hardware.

This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:57 GMT