Harnad, S. (1993) Discussion (passim). In: Bock, G.R. & Marsh, J. (eds.)
Experimental and Theoretical Studies of Consciousness. CIBA Foundation
Symposium 174. Chichester: Wiley.
P. 15-16
Harnad (addressed to Tom Nagel):
But what makes you optimistic? Do you have any inductive grounds for
optimism? In your own writings, if I have not misunderstood them, you
specialize in providing negative analogies to suggest that precisely
the pattern of conceptual revision that has worked successfully in all
previous cases (e.g., matter, life) is doomed to fail in the special
case of mind. I construe the example of electricity that you have just
used as falling under that same DISanalogy. Some sort of unspecified
new concept is clearly needed here that you apparently think is
possible, yet in the past you have given every reason for believing
that such a concept is not possible.
To summarize your own published views (Nagel 1974, 1986) ever so
briefly: In all prior cases, successful reconceptualization has
involved replacing one subjective [1st person] view of an objective
[3rd person] phenomenon by another; that's what it is to have and
understand a new empirical concept. That was how we came to see
electricity, for example, as a flow of charge instead of an elementary
force, heat as average molecular energy, life as macrobiomolecular
properties, etc. In each successful case of reconceptualization,
appearances changed, but always to be replaced by further appearances.
The DISanalogy is that in the special case of mind we are instead
trying to replace SUBJECTIVITY ITSELF by something OTHER
than subjectivity, appearances by something other than other
appearances. There is clearly no precedent for such "bootstrapping"
in any of the other successful cases of scientific reconceptualization;
hence each prior case seems to add inductive evidence AGAINST optimism
in the special case of trying to reconceptualize mind as something
physical, functional, or what have you.
Nagel, T. (1974) What is it like to be a bat?
Philosophical Review 83: 435-450.
Nagel, T. (1986) The view from nowhere. New York: Oxford University Press.
-------------------------------------------------------------------
P. 35 (continued from P. 34)
... namely, that this kind of thing (misinterpreted pain) has nothing to do
with the problem of consciousness.
Let me put it another way. Of course in one basic sense misinterpreted
pain has something to do with consciousness, but only in the sense that
even the capacity to write science fiction or to understand art, and so
on, have something to do with consciousness. These are all different
instances of the particular contents of consciousness. But the basic
problem of mind concerns how it is that any conscious content at all,
any qualitative experience, any subjectivity, can exist (and in what it
might consist, and how, and why).
Consider a cockroach that can feel pain but lacks all that higher-order
interpretative capacity. You may pull off its legs, and if it could
talk it would say "hurting is happening." Just a qualitative
experience, no other beliefs, nothing. Yet even in this lowly example
one is already facing the FULL-BLOWN problem of consciousness; you
don't need to invoke the rest of the Ramachandran (19XX) or Gregory
(19XX) phenomena you mentioned; you don't need all that about
second-order beliefs. All you need is the fact that that stuff that even the
cockroach feels (otherwise known as "qualia," "experiences," "mental
states," with that content that we all know what it's like to be the
subject of) actually happens to exist in the world at all! That's the
REAL problem of consciousness. The rest is just the icing on the cake.
[Request Ramachandran and Gregory references from Tony Marcel, who is the
one who cites them. -- SH]
-------------------------------------------------------------------------
P. 52-54.
There is a methodological problem that I think is going to arise over and
over again at this conference, a form of self-delusion I have elsewhere
dubbed "getting lost in the hermeneutic hall of mirrors" (Harnad 1990,
1991). It again has to do with the primacy of subjective content in
putative physical or functional "explanations" of consciousness. I will
use an example (a computationalist model) which has not been proposed
here yet but which can stand in for all of the similar instances of
this particular methodological pitfall, whatever your favorite candidate
explanation of the mind might be.
Here is how everyone inadvertently but invariably cheats in a
functionalist (or physicalist) theory of consciousness: You offer a
purely functional story, but in interpreting it mentalistically you
simply let the phenomenological "flavor" slip in by the back door,
without admitting or even realizing it. Once it's in, however, all the
rest of the story of course makes perfect functional sense and squares
with our intuitions about consciousness too. That is why I call such a
construction a hermeneutic hall of mirrors: It is really just
reflecting back what you projected onto it by interpreting it
mentalistically in the first place; yet the effect is complete,
coherent and convincing -- as self-sufficient as a perpetual motion
machine (once you assume the little widget that keeps it going). You
take a perfectly objective story and simply allow it, without apology
or explanation, to be INTERPRETED as a subjective one; then of course
it all makes sense, and consciousness is duly explained.
This happens most commonly in computational models of cognitive
states: In reality, all they amount to is a lot of strings of
meaningless symbols that are systematically INTERPRETABLE as "the cat
is on the mat," etc., but once you actually baptize them with that
interpretation, the rest of it is simply self-corroborating (as long as
the syntax will bear the weight of the systematic interpretation) and
hence makes perfect functional sense: All the higher order sentences,
the sentences that are entailed by those sentences, etc., duly follow,
and all of them "mean" what they're supposed to mean, just as thoughts
do. The only trouble is that, apart from the hermeneutics, they're
really just meaningless "squiggles and squoggles" (as Searle 1980 would
call them). The same is true of neurological and behavioral states that
are interpretable as pain states. Abstain from hermeneutics and they're
just inputs, nerve janglings and outputs.
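To make the "systematic interpretability" point concrete, here is a minimal
sketch (a toy example; the tokens and rules are invented purely for
illustration, in Python):

    # Toy symbol system: purely syntactic rewrite rules over arbitrary tokens.
    # Nothing here "means" anything until an interpreter projects a reading onto it.
    rules = {
        ("cat", "ON", "mat"): ("mat", "UNDER", "cat"),  # squiggle -> squoggle
    }

    def derive(sentence):
        # Pure shape-matching: only token identity is consulted, never meaning.
        return rules.get(sentence, sentence)

    print(derive(("cat", "ON", "mat")))  # -> ('mat', 'UNDER', 'cat')

    # Rename every token systematically (cat -> X1, ON -> X2, mat -> X3) in both
    # the rules and the input, and the derivation is unchanged: the syntax bears
    # all the weight. The reading "the cat is on the mat" is projected from
    # outside -- which is all the hermeneutic hall of mirrors ever reflects back.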
I hope it's clear that in reality all such cases just amount to
self-fulfilling prophecy. When you read off the interpretation that you
projected as a VINDICATION of the fact that you have explained a
conscious state, you keep forgetting that it's all done with mirrors
because the interpretation was smuggled in by you in the first place.
That's what I think is happening whenever you propose a functional (or
physical) account of mental states without facing the basic problem,
which is: How are you going to justify the mentalistic interpretation
other than by saying that, once made,
it keeps confirming itself? Mind is not just a matter of interpretation;
hence hermeneutics is just begging the question rather than answering
it.
Harnad, S. (1990) Lost in the hermeneutic hall of mirrors. (Invited
commentary on: M. Dyer, "Minds, Machines, Searle and Harnad")
Journal of Experimental and Theoretical Artificial Intelligence 2: 321-327.
Harnad, S. (1991) Other bodies, other minds: A machine incarnation of an
old philosophical problem. Minds and Machines 1: 43-54.
Searle, J. (1980) Minds, brains and programs.
Behavioral and Brain Sciences 3: 417-457.
--------------------------------------------------------------------
P. 78-79.
Harnad (to Dan Dennett):
May I introduce an intuition that lies behind this? I wanted to ask Dan
Dennett about the status -- in that scientific Utopia when all the
questions have been answered -- of our "Zombie" intuitions: intuitions
about possible "creatures" that could be behaviorally, functionally or
neurally identical to us, interpretable exactly as if they had qualia
(subjectivity, consciousness), yet in reality lacking them ("nobody
home"). These intuitions are not my favorites, but they do exist.
What I'd like to point out here is the significant but overlooked fact
that we do NOT have "quarkless Zombie" intuitions in the case of
matter-modelling (physics) analogous to these "qualeless Zombie"
intuitions in the case of mind-modelling.
Let me explain what I mean: In the case of mind-modelling, at the end
of that last scientific day, when you have a complete functional
explanation of all the empirical data on the mind, I can still ask what
reason I have to believe that you haven't just modelled a Zombie, a
functional look-alike, with no qualia. Yet I could not raise this same
question about quarks in the case of matter-modelling, despite the fact
that quarks are just as unobservable as qualia. For if quarks do still
figure in that Grand Unified Theory when physics reaches a Utopian
state of completeness, we will NOT have intuitions that there could be
an empirically indistinguishable Zombie universe in which, although the
theory posits that quarks exist, there are in reality no quarks. Since
quarks, like qualia, are objectively unobservable, that cannot be the
reason for the difference in intuitions in the two cases. Unlike
qualia, however, quarks would be functionally NECESSARY to the
predictive and explanatory power of the Utopian physical theory we are
imagining -- without positing them, the theory could not explain all
the data. In our Utopian biobehavioral/cognitive theory, by contrast,
qualia will always be OPTIONAL (as the hermeneutic hall of mirrors
shows), because (assuming the theory would be a materialistic rather
than dualistic one) qualia could not play any independent causal or
functional role of their own in it.
Hence this will always leave room for qualeless Zombies. Moreover,
whereas in Utopian physics we know that (by definition) the difference
between a quarkful universe and a quarkless Zombie universe would be a
difference that did not make a difference, we also know from our own
1st-person case that the difference between a qualeful universe and a
qualeless Zombie universe would be just about the greatest difference
we are capable of conceiving. There is, I think, something special
about this, and I take it to be a mark of the mind/body problem.
------------------------------------------------------------
P. 126-127
Let me push this further. Let's focus on syntax, on whether we can talk
about N very different physical systems that all implement the same
formal syntax. You may call that shared property "observer-relative";
perhaps it is, but that's not what I'm challenging. There IS some property that
those N systems all share (and that is not shared by every other
system). It is THAT property -- shared by all implementations of the
same computer program -- that I (in Harnad 1989) construed to be the
real target of your Chinese Room Argument (Searle 1980). That was not an
incoherent target, fortunately; hence your Chinese Room Argument was not
a refutation of an incoherent claim.
Harnad, S. (1989) Minds, machines and Searle.
Journal of Experimental and Theoretical Artificial Intelligence 1: 5-25.
-----------------------------------------------------------
P. 162
The OBJECT of the experience is out there, but the EXPERIENCE is in
here (in the head). When one asks where the experience is, one is not
asking about the locus of the object of the experience but about the
locus of the experience. You seem to be conflating the two.
----------------------------------------------------------
P. 204
If Dan's interpretation were valid, this wouldn't be counterevidence
against it.
I would like to ask Dan: What is this thing that happens and then
vanishes without a trace? You do not seem to be saying merely that it
vanishes without a trace after it happens if it fails to be
consolidated, but that even at the instant when "it" allegedly happens,
it somehow has no conscious manifestation! But then why would you call
"it" a conscious experience at all?
----------------------------------------------------------------
P. 222
I think I have figured out a way to restate the point about timing:
Suppose that the CONTENT of the event we are trying to clock is: "This
SEEMS to be occurring at time t." Let that be the EXPERIENCE we are
talking about, the instant of the seeming-to-occur-at-t.
But what we are really interested in is the true clocktime (call it
t') of the moment of that SEEMING, and not the time t that was merely
its CONTENT. When Ben is looking for the exact moment the experience of
willing a movement occurred, he too is looking for the clocktime of the
SEEMING, not the clocktime of its content.
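A minimal sketch of the distinction (a toy data structure, in Python, with
invented names and numbers):

    from dataclasses import dataclass

    @dataclass
    class Seeming:
        clock_time: float    # t': when the SEEMING itself actually occurs
        content_time: float  # t: the time that figures in the seeming's CONTENT

    # Hypothetical case: at clocktime t' = 0.35 s the subject has the experience
    # whose content is "this SEEMS to be occurring at t = 0.20 s."
    s = Seeming(clock_time=0.35, content_time=0.20)

    # The timing question asks for s.clock_time (the moment of the seeming),
    # not s.content_time (the time reported in its content); the two can differ.
    print(f"discrepancy: {s.clock_time - s.content_time:.2f} s")  # 0.15 s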
---------------------------------------------------------------
P. 223
There is a good example in spatial location. I have read that patients
with blindsight can localize objects (say, on the left or right) because
they feel some sort of inclination or urge to orient toward them.
---------------------------------------------------------------
P. 249
It seems to me that if (according to the best current empirical theory
in linguistics) "move alpha" is a useful, predictive, explanatory
descriptor of a process going on in his head and playing a role in
generating his linguistic capacity, then it surely does not matter
whether or not it is "mental" (i.e., conscious or potentially
conscious). "Move alpha" could be a correct "higher-level description
of neural activity" either way.
----------------------------------------------------------------------
P. 313-314
This point is again related to hermeneutics. I am reminded that the
reason I left the field of laterality (Harnad et al. 1977) was the
disproportionate importance that was assigned in that field to what on
the face of it looks like trivial data. Consider, for example, a 50 ms
speed advantage that may occur when some kind of stimulus is presented
in the left rather than the right visual field in some perceptual task.
If the very same 50 ms advantage had instead been exhibited in the
running speed of one group of kids compared to another (who were given
different motivational instructions, say), the effect would rightly be
dismissed as trivial. But when such effects can be anointed with
the mystique of being related to "the brain" in some way (even
something as vague as being on the left or the right), their significance
and explanatory power are immediately elevated by interpretations (in
terms of grand left-brain/right-brain theories) vastly out of proportion
to their actual empirical content.
Now, by way of analogy, suppose that instead of being recorded from the
brain, Bill Newsome's findings had been read off an oscilloscope stuck
into a simple artificial optical recognition system that was only
capable of detecting orientation; and suppose one found in this
system a unit that was selectively responsive in just the way Bill has
described. One would not feel one had made an inroad on the mind/body
problem (would one?). So why should one feel that one has done so when
one happens to find this in the brain?
Please note that I am certainly not suggesting that Bill's finding is
trivial as neurophysiology (as some of the laterality findings are).
But it certainly does not seem to cast any new conceptual light on the
mind/body problem along the lines Tom Nagel has here expressed optimism
about eventually achieving, as we get closer to scientific Utopia.
Harnad, S., Doty, R.W., Goldstein, L., Jaynes, J. & Krauthamer, G.
(eds.) (1977) Lateralization in the nervous system. New York: Academic
Press.
----------------------------------------------------------------
P. 331
Humphrey: ...be a brute fact of nature.
Harnad:
In any case, puzzles are not resolved by further puzzles. The strength
of quantum mechanics (QM) is the broad range of empirical data it
successfully predicts and explains. The quantum puzzles, on the other
hand, are still a source of frustration and perplexity to most
physicists who give them any thought. The puzzles are certainly not the
TRIUMPHANT aspect of quantum mechanics. Rather, one reluctantly
reconciles oneself with them in exchange for QM's enormous empirical
power. Physicists would no doubt be delighted to jettison all the
quantum puzzles (duality, complementarity, uncertainty) for a Grand
Unified Theory with all the power of QM but none of its paradoxes.
Those who would like to import those paradoxes alone to another field
are trading in QM's weaknesses rather than its strengths.
---------------------------------------------------------------
P. 338 (continued from P. 337)
structure we eventually discover that is necessary and sufficient to do
all the vision must in addition square with the subjective phenomenology
of vision, and all the rest.
So I want to call into question that slight epiphany that we get when
we manage to capture a necessary/sufficient unit such as Bill
Newsome's. I'm suggesting that our epiphany is spurious. Why should our
reaction to this unit be any different from our reaction to a clear
nonstarter, such as the computer vision system I mentioned earlier?
Why do we go "Aha" with this unit and not with a functional equivalent
that has essentially the same properties but neither aspires to be human
nor resides in one?
Let me go a bit further. Not only do I fail to see any justification
for an "Aha" in the one case when it is so clearly unjustified in the
other, but I don't even see anything suggesting the ROAD leading to an
"Aha" in anything along these lines (including even Maggie's
"isomorphisms" and Dan's "heterophenomenology").
-----------------------------------------------------------------------
P. 348
Is there any evidence that right-hemispherectomized patients (young or
old, early or late) are more literal-minded about input than
left-hemispherectomized patients, along the lines of the left- and
right-hemisphere effects seen in split-brain patients?
------------------------------------------------------------------
P. 372-373
The revision that Dan Dennett suggested is already there when Jeffrey
Gray refers to "a process that can be treated as conscious or not." The
simplest way of putting it is that a neural process will always be
INTERPRETABLE either way -- as conscious or not. It's up to you.
There's no objective way to settle the matter, and, more important, it
really makes no difference one way or the other to the empirical
success of the cognitive neurobehavioral theory of which the process in
question is a component. By way of contrast, this is decidedly NOT the
case with, say, a biological theory of LIFE. To be sure, whether or not
something is alive is NOT just a matter of interpretation (just
as it is not a matter of interpretation whether something is conscious
-- has a mind -- or not). But in the case of life, once all the
objective empirical questions have been answered, the FURTHER question
of whether the thing in question is merely interpretable-as-alive or
"really" alive (whether it's a true biome, or an empirically
indistinguishable abiotic Zombie biome)
no longer has any content.
Its content has been exhausted by the answers to all the empirical
questions. In the case of the mind, by contrast, the counterpart of
that last question still has content. You could have Jeffrey's
integrative neuroscience empirically completed and still say on the
last day that it's interpretable this way or that, mindfully or
mindlessly; yet we know that EVERYTHING -- the very existence of our
mental lives -- rides on the validity of one of those interpretations
and not the other, even though all the empirical theory and data are
equally compatible with either one. To put it even more succinctly: the
Utopian theory of life can answer all the substantive questions about
life vs. lifeless zombie lookalikes with nothing left over or out,
whereas the Utopian theory of mind can answer none of the substantive
questions about mind vs mindless zombie lookalikes, and hence leaves
everything out.
--------------------------------------------------------------------------
P. 374
But there is a fundamental disanalogy between (mindless) "zombie mind"
and (lifeless) "zombie life." Your thought experiment applies to the
latter, but not the former. There IS an extra fact of the matter about
mind (and we each know EXACTLY what it is) but NOT about life (except
insofar as there is an element of animism implicit in vitalism, which
simply reduces it all to the mind/body problem again; Harnad 1992).
[Harnad 92 reference is on P. 130]
---------------------------------------------------------------------
P. 379-380
Dan Dennett made a point that wasn't answered, I think, with respect to
what he described as "heterophenomenology" (the complete functional
explanation of not only everything we do, but also how the world looks
to us, in every one of its qualitative details). Another way of putting
Dan's point is that once you have that complete heterophenomenology,
which we would all like to have (but which Jeffrey Gray and I would be
inclined to say is not enough), there would be an awful lot of
apparently superfluous stuff in there if it indeed failed to capture
REAL phenomenology. "If it's not the real thing, but merely
'interpretable-as-if,' then what is all that superfluous stuff that's
so systematically INTERPRETABLE as our complete phenomenology doing
there at all?" Dan could rightly ask. It would seem to be multiplying
entities grossly beyond necessity to have all that interpretable
structure in a Zombie that had no real mental life at all!
-------------------------------------------------------------------
Nagel: "Nobody is denying ... is it an inference or not?"
Harnad:
No, the issue is not just an EPISTEMIC one, concerning sufficiency of
grounds. There is an ONTIC question about both the REAL phenomenology
and the (ostensibly superfluous) INTERPRETABLE-AS-IF "phenomenology" in
Dan's complete heterophenomenology. It's already a mystery why we are
conscious at all, rather than just Zombie lookalikes (because for the
Blind Watchmaker -- a functionalist if ever there was one, and surely
no more a mind-reader than we are -- functionally indistinguishable
Zombies would have survived and reproduced just as well). But Dan's
heterophenomenology seems to force a second ontic dangler on us,
namely, all of that inner stuff a system would have to have in order to
correspond systematically to all the qualitative minutiae of our
subjective lives. We know why a Zombie wouldn't bother to have REAL
qualia; but why would it bother to have all those pseudoqualia either?
Could it be that every qualitative difference in our repertoire
subtends a potentially adaptive functional difference?
-------------------------------------------------------------------
P. 414-415.
Ordinary physics is the most instructive: (This time, instead of using
quarks, I'll use superstrings.) A complete physical theory at the end
of the day will also contain some funny things. Suppose it contains
superstrings. With superstrings, it can account for all the data.
Suppose you try to "deinterpret" the theory by asking "What if, in
reality, there are no superstrings?" The theory collapses; it does not
work (i.e., it does not successfully predict and explain all the data)
unless there are superstrings. You can say "Maybe there's a universe in
which 'zombie superstrings' replace 'real superstrings,'" but that's not
interesting any more. The superstrings or the zombie superstrings are
needed, irrespective of what you want to call them, otherwise the theory
doesn't work. By contrast, no working physical theory requires an
"élan matériel" in order to work, any more than any working biological
theory requires an élan vital, such that, if you take it out, the theory
collapses. That's why biology is no different from physics or
engineering in this respect.
But now what about qualitative content (qualia) in a complete
neurobiological theory? I think Tom (Nagel) and John (Searle) give away
the store when they say that if it's indeed a COMPLETE neurobiological
theory then that's all they want. I am still asking this separate
question: Is the mentalistic interpretation of the theory -- positing
and drawing upon the qualia -- optional or not? If it's not optional,
then there's nothing special about the case of mind science either (but
then I'd like to know precisely WHY it's not optional). But if it IS
...