Re: Pylyshyn: Against Imagery

From: Harnad, Stevan (
Date: Mon Jan 15 1996 - 21:40:48 GMT

> From: "Matsers, Kate" <>
> Date: Tue, 28 Nov 1995 05:39:21 +0000
> Pylyshyn introduces the concept of categorisation when he says that
> human characteristics cannot be wholly described by listing the
> groups in which we (humans) can be placed (e.g. vertebrates,
> cognisers etc.). At this stage he does not actually mention that we
> categorise the information we process, but I can see the relevance in
> the fact that we cannot be described by categories alone: surely if we
> think in categories alone we will never be able to fully
> understand ourselves?

Pylyshyn's book is not really about categorisation, and it is only about
understanding ourselves inasmuch as understanding how we think is surely
part of understanding ourselves.

> ZP> What makes it possible for humans (and other members of the
> ZP> natural kind informivore) to act on the basis of representations is
> ZP> that they instantiate such representations physically as cognitive
> ZP> codes and that their behaviour is a causal consequence of operations
> ZP> carried out on these codes.

What he means is that what goes on in our heads is computation, i.e.,
symbol manipulation, and that the symbols in the head represent things
on the outside of the head, and can guide what we do, based on the
internal manipulations of those symbols (computation again).
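This gloss can be made concrete with a toy sketch (mine, not Pylyshyn's own formalism; all the names here are invented for illustration): a handful of symbolic "beliefs" and "desires", plus a rule that manipulates them, jointly determine an action.

```python
# Toy illustration (not Pylyshyn's formalism): behaviour as a causal
# consequence of operations on internal symbolic codes.

# Internal symbol structures "about" the outside world.
beliefs = {("food", "other_side_of_road")}
desires = {("get", "food")}

def decide(beliefs, desires):
    """A rule operating purely on symbols: if you want X and believe
    X is at location L, act so as to reach L."""
    for (verb, obj) in desires:
        if verb == "get":
            for (thing, location) in beliefs:
                if thing == obj:
                    return ("go_to", location)
    return ("do_nothing",)

print(decide(beliefs, desires))  # -> ('go_to', 'other_side_of_road')
```

Nothing in `decide` "knows" what food or roads are; it shuffles symbols, and the behaviour it produces is interpretable by us as the chicken's crossing the road.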

> Pylyshyn then goes on to introduce the concept of computation as a
> model for cognition. He explains that if a computer program can be
> viewed as a model of cognition, the program must correspond to
> the process people perform.

The symbolic or computational or "software" level of description of a
computer is a special level of its own. It can be described
independently of the physical hardware on which the computations are
running. It is formal, a code, symbols, and symbol manipulations. The
same programme can be run on many radically different machines.
Pylyshyn thinks that cognition is a form of computation, i.e., that to
understand how the mind works, we need only find out what the programme is.
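A minimal sketch of this implementation-independence (my example, not Pylyshyn's): the same symbolic procedure, say reversing a string of symbols, can be realised by two quite different "machines", yet at the computational level it is one and the same program.

```python
# Two different "hardware" realisations of one and the same symbolic
# procedure: reversing a sequence of symbols. The computational-level
# description is identical; only the implementation differs.

def reverse_with_stack(symbols):
    """Realisation 1: push everything onto a stack, then pop it off."""
    stack = list(symbols)
    out = []
    while stack:
        out.append(stack.pop())
    return "".join(out)

def reverse_with_swaps(symbols):
    """Realisation 2: swap symbols in place from the two ends inward."""
    cells = list(symbols)
    i, j = 0, len(cells) - 1
    while i < j:
        cells[i], cells[j] = cells[j], cells[i]
        i, j = i + 1, j - 1
    return "".join(cells)

# Radically different mechanisms, same computation:
assert reverse_with_stack("cognition") == reverse_with_swaps("cognition")
```

For the computationalist, what matters about the mind is analogous to what these two functions share, not the mechanics that distinguish them.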

> That is to say, the processes are performed in the same way. He
> explains that there is strong equivalence between human cognition and
> computation but that there are independent constraints upon both
> computation due to its mechanical foundation and in humans due to
> their biological structure. He calls this the functional architecture
> and distinguishes it from rule governed or representation governed
> processes.

He might as well just have called it hardware differences, because
that's all they are. As a computationalist, he stresses that the
hardware is irrelevant; what is relevant is the software: which
programme, which symbol system is running to produce mental states.

> He defines three levels of explanation that fall within cognitive
> science: the functional architecture, the nature of the codes (their
> symbol structures) and their semantic content.

He might just as well have said the hardware, the software, and the
interpretation of the software.

> Pylyshyn explains that this book will not appeal to philosophers
> because it will not ramble and discuss things.

We'll see about that (whether he rambles, that is)...

> In this section Pylyshyn introduces the concept of scientific
> vocabulary: the fact that two or more sciences (e.g. physics, biology,
> psychology etc.) can explain the same phenomena but will use very
> different terms and perspectives to do so.
> Pylyshyn then says that folk psychology is the best method of
> predicting behaviour;

What he means by "folk psychology" is the way we usually explain why
and how people (and animals) do what they do: "Why did the chicken cross
the road?" "Because it WANTED to get to the other side; because it
BELIEVED there was food there; because it WANTED the food," etc.

The language of thoughts, beliefs, wants, desires, feelings, etc. is
"mentalistic." It is based on the shared knowledge we have that this is
indeed what is going on in our heads.

But that's all just introspection and naive realism. What REALLY goes on
in the head may not be that kind of thing at all. Science might find
that folk psychology was wrong, just as it found that folk physics
(according to which, for example, things fell because of their
"heaviness," rather than the gravitational attraction between a larger body
and a smaller body) was wrong.

> The example is, basically, a car swerving to avoid a pedestrian, the
> car hitting a pole, and the pedestrian running to the 'phone box and
> dialling the numbers 9 and 1 (note that 911 is the American emergency
> number).
> The point that Pylyshyn develops here is that the scientific
> perspective, (and the vocabulary attributed to that science) that is
> used to explain the situation affects the explanation that is
> produced, and therefore which question is answered. For example, if
> we concentrate upon the physical forces involved in the crash we would
> not be explaining why the pedestrian ran to the 'phone box.

More specifically, a geometric or mechanical description of everything
that went on in a video of that scene would be like the description of a
shot in a billiard game. It would not capture things like WANTING to
avoid a pedestrian, or WANTING to call for help because you KNOW that
911 will send help, etc. Pylyshyn is trying to show here that a purely
physical, mechanical description of the scene would fail to capture the
important regularities, whereas the mentalistic description does capture
them. He will go on to say that the mentalistic description can then be
understood in terms of internal representations -- symbol structures and
processes, computations -- corresponding to those beliefs, desires, and
intentions.

> There is then a list of the cognitive actions the pedestrian
> onlookers go through during the accident. They provide a definition
> of the pedestrians' perception of the event and the whys and whats of
> these actions. These cognitive actions can be called intentions, and
> are described using an intentional vocabulary.

It is important not to mix up two related but distinct uses of
"intention" in this area: We all know what we mean by doing something
intentionally vs. unintentionally. Intention in this sense means
"deliberate, purposeful" as opposed to "involuntary" or accidental.

This distinction is definitely a mental one, but Pylyshyn, and the
philosophers who originated the related but different use of "intention"
that is intended here [note the word I just used!] are referring to
intended MEANING rather than intended action. In fact, you can almost
always replace "intentional" by "meaningful" in this context.

Supposing I say: "I went to the bank today" and you ask: "Did you take
out any money?" I reply: "You misunderstood me. My intended meaning for
'bank' was not the financial institution, but the side of a river: I
went for a swim." That is the kind of thing you should keep in mind when
Pylyshyn and others speak about "intentional" vocabulary. "Bank" is
ABOUT something, but it's not about the financial institution in
this case, but about the side of the river.

All the other mental terms used earlier -- believing, thinking, knowing --
they all have an object, they are ABOUT something. You believe that it
is dark outside, you think that it is Tuesday, you know that you have a
toothache. This is all intentional language, describing mental states
that have the peculiar property that they are ABOUT something, they have
an "intended meaning." In the physical description of the event Pylyshyn
describes, if we leave out the intentional aspects, then it is just a
mechanical interaction in space, like a billiard ball interaction.
Nothing is about anything, it just happens.

So Pylyshyn's point is that people only make sense if you describe them
in intentional terms. That's what makes them different from other
objects, even moving objects, like cars and trains, and perhaps even
very sophisticated moving objects, like computers and robots. But
Pylyshyn wants to say that people ARE like these more sophisticated
moving objects (computers don't move much, but they can guide the motion
of their peripheral parts), and that what they have in common is the
internal representations that correspond to these intentional states,
such as believing, thinking or knowing. He thinks these are all forms of
computation, and can only be described at the computational level,
rather than the physical, causal level of hardware.

(By the way, the equally mental vocabulary of "wanting" and "seeing" and
"feeling," you will find, is not as easy to imagine to be merely
computational as the "colder" intellectual vocabulary of "believing,"
"thinking" and "knowing." And in the end, even the "colder" vocabulary
may turn out to have to be grounded in the "hotter" one, if its meanings
are to stand on their own, rather than requiring an outside
interpreter.)

> Once the situation is explained using an intentional vocabulary the
> situation is understandable, and we could then go on to use different
> scientific vocabularies to deduce the other perspectives.

And what Pylyshyn intends to do is to go on to "reduce" intentional states
(= mental states) to computational states.

> When we look for an explanation are we merely looking for an account
> that makes sense, using relevant generalisations?

Good question. Usually we want an account that makes sense, but also an
account that is CORRECT. Remember the anecdote about the man who knew
the secret of the universe when he inhaled laughing gas, yet when it
wore off, all he had written to "explain" it was "the smell of petroleum
pervades throughout." That had made sense to him while he was under the
influence, but it no longer made sense once the laughing gas wore off
-- which just goes to show that "making sense" isn't enough.

What more can an explanation do besides make sense? It can predict
things (correctly). It can give you a causal mechanism (say, a law of
nature, like F = MA) that not only predicts what will happen, but also
allows you to model it, control it, and connect it to the causal
mechanism of other things, eventually everything.
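The F = MA example can be made into a one-line worked instance (my sketch, just ordinary Newtonian arithmetic): given a force and a mass, the law predicts the acceleration, and from that, future motion.

```python
# F = m*a: the kind of causal, predictive law contrasted here with
# explanations that merely "make sense".

def acceleration(force_newtons, mass_kg):
    """Newton's second law, rearranged: a = F / m."""
    return force_newtons / mass_kg

def position_after(t_seconds, force_newtons, mass_kg, x0=0.0, v0=0.0):
    """Predict position under constant force: x = x0 + v0*t + (1/2)*a*t^2."""
    a = acceleration(force_newtons, mass_kg)
    return x0 + v0 * t_seconds + 0.5 * a * t_seconds**2

# A 10 N force on a 5 kg mass, starting at rest, for 2 seconds:
print(position_after(2.0, force_newtons=10.0, mass_kg=5.0))  # -> 4.0
```

The prediction is testable: measure where the mass actually is after two seconds, and the law stands or falls on the comparison.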

So besides making sense, an explanation should predict, give a causal
mechanism, and perhaps help you control the thing that is being
explained, using the causal mechanism (which is another kind of
prediction). It also follows from this that an explanation should be
testable (and making predictions is the usual way of testing it), for
if it is untestable, there's no way you can know whether it's right or
wrong, no matter how much sense it makes.

And even "making sense" is ambiguous, because, besides meaning that it
sounds convincing and plausible, "making sense" can also mean that it is
logically consistent. For these two things can be at odds: Something
that makes sense in the sense of sounding plausible to us, may
nevertheless not make sense, in the sense of being logically consistent:
It may contain fatal logical contradictions. An example of that is
trisecting an angle with compass and straightedge: It made sense to
people to try to do it, and they still try to do it, even though it has
been proved formally to be impossible, logically contradictory.

Now Pylyshyn is aiming for a practical KIND of explanation for mental
states: a computational one. And he does indeed hope to give one that
makes correct predictions, provides a causal mechanism (when the
computations are implemented on hardware) and even allows us to have a
causal influence over mental states (e.g., cognitive penetrability).

> Cognitive systems consider what could have been as well as what was.
> This is the difference between description and explanation.

Well, this is a bit of a philosophers' slogan: philosophers love
"counterfactuals" -- statements of the form "If you had heated that
bottle, it would have exploded." No need to worry too much about that:
"predictive" has that covered: "If you heat that bottle, it will explode"
is good enough...

> Capturing Generalisations.
> The point here is that cognitive terms are not merely heuristic but do
> explain something that biological and behavioural approaches do not.
> That is, cognitive terms enable us to capture generalisations, and
> interpret things in the shape of categories we have stored as
> representations within us.

I would say Pylyshyn does some of the rambling here that you say he will
not do: It's enough to say it once: If you don't describe people in
terms of mental states, intentions, they don't make sense and you can't
make the right generalisations and predictions; these are not contained
in the purely physical, billiard-ball description of how people move
around in the world. But saying it once clearly is probably as much as
it needs (I'm not blaming you here, Kate, but Pylyshyn, for using too
many words...)

> Behaviourism could only work here if we
> had been in a very similar situation to have our behaviour reinforced.

That's right. Describing us as pushed and pulled by reinforcements is
no more helpful, predictive, or explanatory than describing us as
billiard balls being pushed around by collisions.

> The generalisations are aided by expectation, and are internally
> processed. Pylyshyn uses the example that if the pedestrian had been
> told that the accident was a rehearsal for a T.V. show they would not
> behave in a way that fitted the perceptual stimulus (and thus
> generalisation of response) "accident".

Yes, mentalistic interpretation IS interpretation, and often you have a
choice of more than one interpretation of what a scene is really ABOUT.

> ZP> ... the relation between conditions and actions is seen as
> ZP> mediated by beliefs as well as, perhaps, considerable reasoning or
> ZP> cognitive processing rather than merely being linked by nomological
> ZP> laws. It is this remarkable degree of stimulus-independent control
> ZP> of behaviour that has been the Achilles' heel of behaviourism.

"Nomological laws" just means the usual causal laws of physics (e.g.,
F=MA), which are enough to predict and explain the "behaviour" of
billiard balls, but not that of organisms.

Looking forward to the rest of the installments of your summary of the book.

Chrs, Stevan

This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:57 GMT