Harnad on Dennett on Chalmers on Consciousness

The following are Quote/Comments on Dan Dennett's forthcoming paper, "The Fantasy of First-Person Science," based on his recent debate with Dave Chalmers about consciousness: http://ase.tufts.edu/cogstud/papers/chalmersdeb3dft.htm

Quotes are all from Dan's draft.


Stevan Harnad
Intelligence/Agents/Multimedia Group
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton

The mind/body problem is the feeling/function problem (Harnad 2001). The only way to "solve" it is to provide a causal/functional explanation of how and why we feel...

Descartes: How is it possible for me to tell whether a thought of mine is true or false, perception or dream?
Why, oh why do we keep conflating this question, which is about the uncertainty of sensory information, with the much more profound and pertinent one, which is about the functional explicability and causal role of feeling?
Kant: How is it possible for something even to be a thought (of mine)? What are the conditions for the possibility of experience (veridical or illusory) at all?
That's not the right question either. The right question is not even an epistemic one, about "thought" or "knowledge" (whether veridical, illusory, or otherwise) but an "aesthesiogenic" one: How and why are there any feelings at all?
Turing: How could we make a robot that had thoughts, that learned from experience (interacting with the world) and used what it learned the way we can do?
"Thoughts" is 100% equivocal. If it just means "internal goings-on that generate certain outputs in response to certain inputs," then no problem (and no problem solved!). But if "thoughts" means "felt thoughts," then you might as well call them "feelings" (what it feels-like to think and reason is just one instance of the multiqualitative world of feelings; there's also what it feels-like to see, touch, want, will, etc.):
"On the face of it, an emotion is just a synonym for a certain kind of feeling. (Other kinds of feelings would be sensations like seeing something blue or hearing something loud, hybrid emotion/sensations like feeling pain, desire-states like wanting something, psychomotor states like willing an action, or complex feeling/knowing-states like believing, doubting, or understanding something.)" (Harnad 2001) http://www.cogsci.soton.ac.uk/~harnad/Tp/bookrev.htm
So learning is fine, lovely. But why felt learning? Unless you focus on the feeling, the question is, as usual, begged. If one uses "feelings" (as one should) instead of the 100% equivocal "thoughts," one cannot even reformulate the Turing version of the question in such a way as to make it sound as if it makes any sense. It instead becomes the merely behavioral/functional question -- and answer -- that it is and ought to be. And the feelings clearly have no role to play in any of it.
(A) Cool! Turing has found a way to actually answer Kant's question!
Kant's was not the right question, but Turing's is an answer neither to Kant's question nor to the right one (about feelings).
(B) Aaaargh! Don't fall for it! You're leaving out . . . experience!
You are leaving out feeling. (Experience, like thought, is 100% equivocal. The relevant bit is felt experience -- not just "had" experience, or "real-time-past" experience, or "functioned through" experience, or "processed" experience, or data.)
we are robots made of robots; we're each composed of some few trillion robotic cells, each one as mindless as the molecules they're composed of, but working together in a gigantic team that creates all the action that occurs in a conscious agent.
No doubt. The persistent niggler, though, is how and why all that admirable hierarchical Turing function should be felt... Hand-waving does not answer that question (even if it is indeed true, as I believe it is, that feelings must piggy-back, somehow, on T3-power, the robotic Turing Test; Harnad 2000).

Unlike you, Dan, I stand ready to admit that neither I nor anyone else has even a clue-of-a-clue about how one could cash in that "somehow" functionally. Hand-waving -- emergence, giant cooperative entities consisting of dumb homunculi that "add up" to feeling agents "somehow" -- just won't cut it. And it will not cut it until and unless you can cash in that "somehow" causally/functionally, and not have the feelings slip through the mechanism as entirely superfluous -- as they invariably do whenever you actually try to give them any causal role of their own: For the causal role always turns out to perform itself perfectly well without any hint of feeling (thank you very much!), and the fact that that causal mechanism is also a feeling mechanism is just a just-so story, insofar as the causality itself is concerned: The fact that that story could also happen to be true does not help! It's the "somehow" that needs to be cashed in, causally, and "just-so" won't do it...

Turing's great contribution was to show us that Kant's question could be recast as an engineering question. Turing showed us how we could trade in the first-person perspective of Descartes and Kant for the third-person perspective of the natural sciences and answer all the questions without philosophically significant residue.
And at the same time trade in a question about feelings for a question about something else: I/O capacity. Good stuff, but another matter entirely.

That they correlate (feelings and function) is an interesting fact. Explaining how and why is another matter. Turing does not touch that at all (Harnad 2000a).

David Chalmers is the captain of the B team (along with Nagel, Searle, Fodor, Levine, Pinker, Harnad, and many others). He insists that he just knows that the A team leaves out consciousness.
David might know it's consciousness, but I know it's feelings.

Consciousness, being half-epistemic, like thought, is equivocal. This is just about feelings. Aboutness has nothing to do with it. It's just the how/why of feeling that the A Team (and everyone else who has a go) invariably leaves out. To not leave it out would be to answer the simple question: "How and why do T3 robots like ourselves feel?" Why don't they just go about their Turing business (including the emailing you and I are doing right now) zombily? Why feelingly?

There can't be zombies, you reply? But I agree! All I am asking is how and why not!

Tell 'em,
Show 'em,
Rah Rah Rah!
Do or Die!
Winner must tell
How and Why!

It doesn't address what Chalmers calls the Hard Problem. How does he know? He says he just does. He has a gut intuition, something he has sometimes called direct experience. I know the intuition well. I can feel it myself.
I don't know what Dave's intuition is. But I can tell you that until you explain why and how a pinch hurts, the game's not won. (That it does hurt, and that that hurting correlates perfectly with some functional story, is not the how/why explanation we were seeking...)
When I put up Turing's proposal just now, if you felt a little twinge, a little shock, a sense that your pocket had just been picked, you know the feeling too. I call it the Zombic Hunch (Dennett, forthcoming). I feel it, but I don't credit it.
Am I appealing to the Zombic hunch when I ask why a frog should feel something when you shock its foot, rather than just going through the familiar functional (Turing) nociceptive story?

But my "Zombic hunch" is not that there could be a Turing-equivalent frog that did not feel! My Zombic "hunch" is that I know a how/why explanation when I see/hear one, and there's none in sight for how and why the frog either isn't or cannot be a Zombie. It may very well be the case that it cannot be. But I want to know how/why (and not just "that," or "just-so")!

I figure that Turing's genius permitted him to see that we can leap over the Zombic Hunch.
Nothing of the sort! It was the equivocation on "thinking" and "intelligence" (so that eventually he suggested discarding the notions altogether), plus the equivocation on the epistemic questions (Descartes' and Kant's, as you introduced them) that allowed Turing to slough it all off as just leading to solipsism, which makes further discourse unprofitable...
TURING: "This argument appears to be a denial of the validity of our test. According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe 'A thinks but B does not' whilst B believes 'B thinks but A does not'. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks."

Turing was a genius, but not for this one! If Turing had instead said:

*TURING: "I don't think we can make any functional inroads on feelings, so let's forget about them and focus on performance capacity, trusting that, if they do have any functional role, feelings will kick in at some point in the performance capacity hierarchy, and if they don't they won't, but we can't hope to be any the wiser either way"
then he would have been at least a realist, and squarely facing the hard question.

But that is called side-stepping, not "leaping over"...

We've learned to dismiss other such intuitions in the past: the obstacles that so long prevented us from seeing the Earth as revolving around the sun, or seeing that living things were composed of non-living matter. It still seems that the sun goes round the earth, and it still seems that a living thing has some extra spark, some extra ingredient that sets it apart from all non-living stuff, but we've learned not to credit those intuitions.
Correct. Because in those cases we have a causal/functional explanation of what is actually the case, and the data supporting it. Hence there we have no problem with adjusting our beliefs to the how/why explanation, even when it is at odds with appearances.

But that is why these familiar and oft-repeated analogies are disanalogies. Because the requisite how/why causal/functional explanation is not only absent in the case of feelings, but there are good a priori grounds for believing that it is not even possible (for methodological reasons).

But at best, you have only a counterfactual or future-conditional statement here: "If and when someone ever does come up (mirabile dictu) with a how/why explanation for feelings, as they did in those other cases, then we had better be ready to adjust our beliefs to it."

I'm ready.

Now let's hear that explanation (and how it successfully navigates the [reckless] Scylla of Telekinesis and the [feckless] Charybdis of Epiphenomenalism) (Harnad 2001)!

1. Are you sure there is something left out?

In Consciousness Explained, (Dennett, 1991) I described a method, heterophenomenology, which was explicitly designed to be the neutral path leading from objective physical science and its insistence on the third-person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences, while never abandoning the methodological principles of science. (CE, p72.)

Already sounds like it's headed into the shadows of equivocation: How is it going to account for feelings?
How does it work? We start with recorded raw data. Among these are the vocal sounds people make (what they say, in other words), but to these verbal reports must be added all the other manifestations of belief, conviction, expectation, fear, loathing, disgust, etc., including any and all internal conditions (e.g. brain activities, hormonal diffusion, heart rate changes, etc.) detectable by objective means.
Sounds like the familiar disjoint -- yet (inexplicably) correlated -- function/feeling family: Behavior, brain function, and all manner of structural and functional substrate on one side, and what they feel like on the other. Now the picture is a "hetero" one alright. There's both "kinds" of stuff, and they are 100% correlated.

But now what?

I guess I should take some of the blame for the misapprehension, in some quarters, that heterophenomenology restricts itself to verbal reports. Nothing could be further from the truth. Verbal reports are different from all other sorts of raw data precisely in that they admit of (and require, according to both heterophenomenology and the 1st-person point of view) interpretation as speech acts, and subsequent assessment as expressions of belief about a subject's private subjective state. And so my discussion of the methodology focused on such verbal reports in order to show how they are captured within the fold of standard scientific (3rd-person) data. But all other such data, all behavioral reactions, visceral reactions, hormonal reactions, and other changes in physically detectable state are included within heterophenomenology. I thought that went without saying, but apparently these additional data are often conveniently overlooked by critics of heterophenomenology.
Dear Dan. All bodily and molecular motions are there and available. And obviously 100% coupled with the feelings. The question (I hate to be repetitious) is how and why?

You are describing an empirical psychophysical data-gathering paradigm for getting all the measurable ("3rd person") correlates of feelings. Fine.

But until further notice, the only one who can actually feel the feelings themselves is the party of the first part, the feeler (the "1st person").

So we have this hetero-soup, with the psychophysicist measuring all manner of behavioral and neural and functional correlate (including predictive functional/computational modeling) of feeling, and then we have the feelers who are actually doing the correlated feeling. Then what? What do we do with this heteromix (or is it miscegenation)?

For the functional part will continue to hew only to its functional drummer. All those behaviors and neural processes will have their functional and adaptive causal explanations of the usual sorts. But their felt correlates will not. So where does that get us?

From the recorded verbal utterances, we get transcripts (e.g., in English or French, or whatever), from which in turn we devise interpretations of the subject's speech acts, which we thus get to treat as (apparent) expressions of their beliefs, on all topics. Thus using the intentional stance (Dennett, 1971, 1987), we construct therefrom the subject's heterophenomenological world. We move, that is, from raw data to interpreted data: a catalogue of the subject's convictions, beliefs, attitudes, emotional reactions, . . . (together with much detail regarding the circumstances in which these intentional states are situated),
This is the equivocation again. It is not the aboutness we are after, it is the sentience. Where does the fact that there are feelings correlated with all this come into the picture: how, and why?

"emotional reactions"? Gimme a break! You have reactions, and you have correlated feelings. Let's not make Damasio's Error (or perhaps W. James's) again, and conflate the emotion with the motion...

but then we adopt a special move, which distinguishes heterophenomenology from the normal interpersonal stance: the subject's beliefs (etc.) are all bracketed for neutrality.
What this means needs to be explained here, before going into its why's...
Why? Because of two failures of overlap, which we may label false positive and false negative. False positive: Some beliefs that subjects have about their own conscious states are provably false, and hence what needs explanation in these cases is the etiology of the false belief.
This is equivocation on "Descartes/Kant" again: We are not interested in whether your toothache was real or psychosomatic, or even if your tooth was hallucinated, nor in the conditions under which these various things may or may not happen or be predicted. We are interested in how/why they feel like anything at all.
For instance, most naive people think their visual fields are roughly uniform in visual detail or grain all the way out to the periphery. Even sophisticated cognitive scientists can be startled when they discover just how poor their capacity is to identify a peripherally located object (such as a playing card held at arm's length). It certainly seems as if our visual consciousness is detailed all the way out all the time, but easy experiments show that it isn't. (Our color vision also seems to extend all the way out, but similar experiments show that it doesn't.) So the question posed by the heterophenomenologist is: "Why do people think their visual fields are detailed all the way out?" not this question: "How come, since people's visual fields are detailed all the way out, they can't identify things parafoveally?"
Completely irrelevant, and we are well on the way to begging the question. You are back on (irrelevant) conditions for veridicality, whereas what is at issue is the how/why of feeling anything at all...
False negative: Some psychological things that happen in people (to put it crudely but neutrally) are unsuspected by those people. People not only volunteer no information on these topics; when provoked to search, they find no information on these topics. But a forced choice guess, for instance, reveals that nevertheless, there is something psychological going on. This shows, for instance, that they are being influenced by the meaning of the masked word even though they are, as they put it, entirely unaware of any such word.
"as they put it"? But why pick equivocal cases where we really aren't quite sure whether the Ss are feeling anything at all? Why not pick an open-and-shut case like feeling or not feeling warm-now? Not thermosensitivity or thermoregulation or thermolucution. Not being warm now (that's function); not being ready to say an instant later "well, maybe I didn't feel warm then after all, just a little tense" etc. None of these qualitative variants matters a whit. It is that anything is being felt at all that is at issue here. Exotic data on priming and implicit processing don't have any bearing on this at all!
(One might put this by saying that there is a lot of unconscious mental activity, but this is tendentious; to some, it might be held to beg the vexed question of whether people are briefly conscious of these evanescent and elusive topics, but just hugely and almost instantaneously forgetful of them.)
It begs the question, it changes the subject, and (frankly) calling genuinely unconscious cerebral events "mental" seems to me to be bending the language. Why don't we just use "mental" for the felt and "cerebral" for the unfelt, and then we can avoid yet another possibility of people's talking past one another needlessly?

(Unconscious mental processes, or unconscious thoughts, once dumped of their irrelevant semantic content -- semantic content which we know even the inert sentences on the pages of a book have -- always put me in mind of the incoherent notion of "unfelt feelings." I confess I've also always thought the Freudian type of "unconscious mind" was a redundant, superfluous, superstitious notion too, mais passons...)

Now faced with these failures of overlap (people who believe they are conscious of more than is in fact going on in them, and people who do not believe they are conscious of things that are in fact going on in them), heterophenomenology maintains a nice neutrality: it characterizes their beliefs, their heterophenomenological world, without passing judgment, and then investigates to see what could explain the existence of those beliefs.
But who cares about "beliefs" (which, as we know, can be ascribed to books and computers if we like)? What we wanted to know about was feelings. Not beliefs about feelings, but feelings. What is their functional role in a heterophenomenological theory?
Often, indeed typically or normally, the existence of a belief is explained by confirming that it is a true belief provoked by the normal operation of the relevant sensory, perceptual, or introspective systems. Less often, beliefs can be seen to be true only under some arguable metaphorical interpretation: the subject claims to have manipulated a mental image, and we've found a quasi-imagistic process in his brain that can support that claim, if it is interpreted metaphorically. Less often still, the existence of beliefs is explainable by showing how they are illusory byproducts of the brain's activities: it only seems to subjects that they are reliving an experience they've experienced before (déjà vu).
But we are talking about feelings, not beliefs. Yes, the heteroph. experimentalist (E) needs to be able to infer, from the S's behavior (including what he says) what he is feeling. But let's set his methodological problems aside and grant that he'll sometimes be right and sometimes wrong (the E); the S can't be wrong about what his feeling feels-like (I'm with Descartes on that one), but that's neither here nor there either! We were not talking about either E or S being right or wrong about S's feeling. We were talking about S's feeling:

How/why does S feel, pray?

In this chapter we have developed a neutral method for investigating and describing phenomenology. It involves extracting and purifying texts from (apparently) speaking subjects, and using those texts to generate a theorist's fiction, the subject's heterophenomenological world. This fictional world is populated with all the images, events, sounds, smells, hunches, presentiments, and feelings that the subject (apparently) sincerely believes to exist in his or her (or its) stream of consciousness. Maximally extended, it is a neutral portrayal of exactly what it is like to be that subject in the subject's own terms, given the best interpretation we can muster. . . . People undoubtedly do believe that they have mental images, pains, perceptual experiences, and all the rest, and these facts (the facts about what people believe, and report when they express their beliefs) are phenomena any scientific theory of the mind must account for. (CE, p98)
Whatever they are believing about what they are feeling when they are feeling it, chances are they are right. But never mind; that's not the issue. Surely heteroph. is not just a lie-detector methodology: When do we get to the how/why of the feeling, the causal/functional explanation?
Is this truly neutral, or does it bias our investigation of consciousness by stopping one step short? Shouldn't our data include not just subjects' subjective beliefs about their experiences, but the experiences themselves?
Well, that doesn't sound like our data (we being the E's, the 3rd person's) but S's data (the 1st person). But who cares? Include them if you like! Call S's toothache, the feeling, a part of your data-set too, if you like. We know that sometimes at least, toothache is indeed correlated with S's behavior. So what do we lose if we just suppose that it's always true? The correlation is perfect, 100%, and you, Dan, the hetero E, "own" both, the behavior and the feelings. They're both your data, in the hetero batch.

Now comes the hard part: How/why does S feel? (Please make sure your reply is not to some other question, which could be perfectly well answered without his feeling anything at all. Don't say "The function of the feeling is to draw his attention to..." -- because the reply will always be: Why/how does any of that "drawing" have to be a felt drawing, rather than just a drawing? Why/how does any of that attention have to be a felt attention, not just a "selective processing" etc. This is where the ineluctable difficulty lies, not in a hybrid data-base or in authenticating correlations/predictions between functional data and feelings.)

"Levine, a first-string member of the B Team, insists that conscious experiences themselves, not merely our verbal judgments about them, are the primary data to which a theory must answer." (Levine, 1994)
This is all just repetition. It's feeling that needs to be explained. That's all there is (or ever was) to it.

Levine seems to want to deny you the right to count the feelings themselves as part of your hybrid data-set. I'm allowing you. It won't help...

This is an appealing idea, but it is simply a mistake. First of all, remember that heterophenomenology gives you much more data than just a subject's verbal judgments; every blush, hesitation, and frown, as well as all the covert, internal reactions and activities that can be detected, are included in our primary data.
This is repetition of what came earlier...
But what about this concern with leaving the conscious experiences themselves out of the primary data?
Moot. I've given them to you, and it doesn't help. But bickering about this just changes the subject and defers the fatal function/feeling how/why question...
Defenders of the first-person point of view are not entitled to this complaint against heterophenomenology, since by their own lights, they should prefer heterophenomenology's treatment of the primary data to any other. Why? Because it does justice to both possible sources of non-overlap.
Who are the defenders of the 1st-person view? What is that view? And what is being defended?

There are feelings. Only the feeler feels them. But you can count them as part of your data because they are real enough. Now what?

On the one hand, if some of your conscious experiences occur unbeknownst to you (if they are experiences about which you have no beliefs, and hence can make no "verbal judgments"), then they are just as inaccessible to your first-person point of view as they are to heterophenomenology.
If they are unfelt, they are not feelings, and hence not relevant to any of this! (Plenty of unfelt internal functions, from temperature regulation to perhaps semantic priming and blindsight: So what? They are not the problem! Feelings are!)
Ex hypothesi, you don't even suspect you have them--if you did, you could verbally express those suspicions. So heterophenomenology's list of primary data doesn't leave out any conscious experiences you know of, or even have any first-person inklings about. On the other hand, unless you claim not just reliability but outright infallibility, you should admit that some -- just some -- of your beliefs (or verbal judgments) about your conscious experiences might be wrong. In all such cases, however rare they are, what has to be explained by theory is not the conscious experience, but your belief in it (or your sincere verbal judgment, etc). So heterophenomenology doesn't include any spurious "primary data" either, but plays it safe in a way you should approve.
You've lost me. I don't for a minute doubt that eventually we will be able to do 100% mind-reading via functional correlates of feeling. So surely that's not at issue either. What will be left unexplained by this perfect predictability of feelings from their functional correlates is how/why there are feelings at all. The explanation, not the prediction! Back to square one. Let us not waste our time on veridicality, either S's or E's...
Heterophenomenology is nothing but good old 3rd-person scientific method applied to the particular phenomena of human (and animal) consciousness. I didn't invent the method; I merely described it, and explained its rationale. A bounty of excellent heterophenomenological research has been done, is being done, on consciousness. See, e.g., the forthcoming special issue of Cognition, edited by Stanislas Dehaene, on the cognitive neuroscience of consciousness.
The cognitive neuroscience of the functional correlates of consciousness (i.e. of feelings)...
It contains a wealth of recent experiments all conducted within the methodological strictures of heterophenomenology, whose resolutely third-person treatment of belief attribution squares perfectly with standard scientific method: when we assess the attributions of belief relied upon by experimenters (in preparing and debriefing subjects, for instance) we use precisely the principles of the intentional stance to settle what it is reasonable to postulate regarding the subjects' beliefs and desires.
Yes, but are we making any progress on the how/why front...?
Now Chalmers has objected (in the debate) that this behavioristic treatment of belief is itself question-begging against an alternative vision of belief in which, for instance, "having a phenomenological belief doesn't involve just a pattern of responses, but often requires having certain experiences" (personal correspondence, 2/19/01).
The "certain experiences" can be any feeling! Why is this simple, ubiquitous phenomenon, the only one at issue, really, treated as if it were a peekaboo piece of esoterica someone raises every now and then?

Just pick any feeling at all: pinch/ouch. That's all you need. The full-blown problem is there, even with an organism that has that feeling and that feeling only in its repertoire. Explain the how/why of that. The rest is just a ritual dance skirting around the question.

On the contrary, heterophenomenology is neutral on just this score, for surely we mustn't assume that Chalmers is right that there is a special category of phenomenological beliefs (that there is a kind of belief that is off-limits to zombies but not to us conscious folks).
The issue with zombies is not "beliefs" (whatever those are: for Cummins a thermometer has beliefs, for some maybe a book-page or even a wall does); the issue with zombies is feelings, not beliefs (nor beliefs about feelings).
Heterophenomenology allows us to proceed with our catalogue of a subject's beliefs, leaving it open whether any or all of them are Chalmers-style phenomenological beliefs or mere zombie-beliefs. (More on this later.) In fact, heterophenomenology permits science to get on with the business of accounting for the patterns in all these subjective beliefs without stopping to settle this imponderable issue. And surely Chalmers must admit that the patterns in these beliefs are among the phenomena that any theory of consciousness must explain.
I think talking about beliefs is just heading us back into empty equivocation.
Let's look at a few cases of heterophenomenology in action. [Demo of Ramachandran's example of motion capture under isoluminance.] Do you see the motion? You see apparent motion. Does the yellow blob really move? The blob on the screen doesn't move. Ah, but does the subjective yellow blob in your experience move? Does it really move, or do you just judge that it moves? Well, it sure seems to move! That is what you judge, right?
There's something it feels like. It doesn't matter what! That it feels like anything at all is the bad news (for functionalists).
You are not authoritative about what is happening in you, but only about what seems to be happening in you,
Let me put it another way: seems = feels-like

And it's not the "authority" about whether it feels-like this or feels-like that that matters. (I happen to accept that this is incorrigible, but it doesn't matter; the same punchline would apply if I thought feelings were plastic, ambiguous or fallible.) What matters is that anything is felt at all! (How/why?)

and we are giving you total, dictatorial authority over the account of how it seems to you, about what it is like to be you. And if you complain that some parts of how it seems to you are ineffable, we heterophenomenologists will grant that too. What better grounds could we have for believing that you are unable to describe something than that (1) you don't describe it, and (2) confess that you cannot? Of course you might be lying, but we'll give you the benefit of the doubt. (CE, p96-7)
And who cares? What is at issue here? All this is alright for a lie-detection psychophysiology project, "polygraph science," but we were supposed to be discussing the "hard one"...
Is there anything about your experience of this motion capture phenomenon that is not explorable by heterophenomenology? I'd like to know what.
I don't care! I don't care if every nook and cranny, every last JND of my feeling life is correlated with and hence detectable and predictable from something you can pick up on your polygraph screen or can infer from my behavior. That's not the question! The question is: How/why does anyone/anything feel at all?
This is a fascinating and surprising phenomenon, predicted from the 3rd-person point of view, and eminently studiable via heterophenomenology. (Tom Nagel once claimed that 3rd-person science might provide us with brute correlations between subjective experiences and objective conditions in the brain, but could never explain those correlations, in the way that chemists can explain the correlation between the liquidity of water and its molecular structure.
And I agree with Tom completely (but with reasons: the impossibility of giving a nontelekinetic causal/functional explanation of feeling).
I asked him if he considered the capacity of industrial chemists to predict the molar properties of novel artificial polymers in advance of creating them as the epitome of such explanatory correlation, and he agreed that it was. Ramachandran and Gregory predicted this motion capture phenomenon, an entirely novel and artificial subjective experience, on the basis of their knowledge of how the brain processes vision.)
Dear Dan, you keep giving examples of successful prediction of functions from functions, and then an overall causal/functional explanation of the correlation. But when feeling rather than function is what is being predicted, all progress stops with the prediction. There are no further steps to be taken; only regression back to the functional explanation of the functional correlates.
See next Rensink's change blindness. [Demo] (By the way, this is an effect I predicted in CE, much to the disbelief of many readers.)
Since the feeling/function correlation is 100%, it follows that 100% predictability is possible. So what? (This is not to detract from your own predictions. But those are the "easy" questions of ordinary science [in the case of heterophenomenology, lie-detector psychophysics/psychophysiology]. Here, we are aspiring to the "hard" question, where your predictions, commendable as psychophysics, are unfortunately no help.)
Were your qualia changing before you noticed the flashing white cupboard door? You saw each picture several dozen times, and eventually you saw a change that was swift and enormous (Dennett, 1999; Palmer, 1999), but that swift, enormous change was going on for a dozen times and more before you noticed it. Does it count as a change in color qualia?
Only felt feelings count. If I didn't feel it at the time, I didn't feel it. (I might have forgotten it afterward, but that's another story.) All this stuff about the "penetrability" and describability (and plasticity and ambiguity) of feelings is interesting, but irrelevant...
The possible answers:

A. Yes. B. No. C. I don't know: (1) because I now realize I never knew quite what I meant by "qualia" all along; (2) because although I know just what I have always meant by "qualia," I have no first-person access to my own qualia in this case -- (a) and 3rd-person science can't get access to qualia either!

Never mind "qualia." Just call them feelings. I can misremember, I can misdescribe, but whatever I felt, I felt. Whatever that feeling felt-like (not how I remember or describe it, but how it felt at the time) is what we are talking about here -- and not even how it felt, but that it felt like anything at all. That is the warp and the woof of all this. Explain the how/why of that, and you have won me over to your team. Till then, it all just sounds like beating about the bush...
Let's start with option C first. Many people discover, when they confront this case, that since they never imagined such a phenomenon was possible, they never considered how their use of the term "qualia" should describe it. They discover a heretofore unimagined flaw in their concept of qualia, rather like the flaw that physicists discovered in their concept of weight when they first distinguished weight from mass. The philosophers' concept of qualia is a mess. Philosophers don't even agree on how to apply it in dramatic cases like this. I hate to be an old I-told-you-so, but I told you so ("Quining Qualia"). This should be at least mildly embarrassing to our field, since so many scientists have recently been persuaded by philosophers that they should take qualia seriously, only to discover that philosophers don't come close to agreeing among themselves about when qualia -- whatever they are -- are present. (I have noticed that many scientists who think they are newfound friends of qualia turn out to use the term in ways no self-respecting qualophile will countenance.)
I haven't seen this demo, but it sounds like it's about how you would remember or describe what you felt at the time. Irrelevant. All that matters is that you felt something at all, be it ever so skittish, malleable, ambiguous, or whatever other fallibility you want to ascribe to it. It is a feeling, and that's a fatal liability for a functionalist...
But although some philosophers may now concede that they aren't so sure what they meant by qualia all along, others are very sure what concept of qualia they've been using all along, so let's consider what they say. Some of them, I have learned, have no problem with the idea that their very own qualia could change radically without their noticing.
On-line, in-future, retrospectively? The last sounds like reinterpreting the same feeling (because I'm as unready to believe in backwards causation as in telekinesis...)
They mean by qualia something to which their 1st-person access is variable and problematic. If you are one of those, then heterophenomenology is your preferred method, since it,
Preferred method for what? Lie-detection, mind-reading? Fine. But for explaining the how/why of feeling?
unlike the first-person point of view, can actually study the question of whether qualia change in this situation. It is going to be a matter of some delicacy, however, how to decide which brain events count for what. In this phenomenon of change blindness for color changes, for instance, we know that the color-sensitive cones in the relevant region of your retina were flashing back and forth, in perfect synchrony with the white/brown quadrangle, and presumably (we should check) other, later areas of your color vision system were also shifting in time with the external color shift. But if we keep looking, we will also presumably find yet other areas of the visual system that only come into synchrony after you've noticed. (Such effects have been found in similar fMRI studies, e.g., O'Craven et al. 1997.)
All standard functionalism. What's the prob?
The hard part will be deciding (on what grounds?) which features of which states to declare to be qualia and why.
No, the hard part will be explaining why there are any feelings involved with all this at all...
I am not saying there can't be grounds for this. I can readily imagine there being good grounds, but if so, then those will be grounds for adopting/endorsing a 3rd-person concept of qualia (cf. the discussion of Chase and Sanborn in Dennett, 1988, or the beer-drinkers in CE, pp. 395-6). The price you have to pay for obtaining the support of 3rd-person science for your conviction about how it is/was with you is straightforward: you have to grant that what you mean by how it is/was with you is something that 3rd-person science could either support or show to be mistaken.
No problem. Science can read my mind better than I can, can predict better than I can, can reinterpret my feelings for me better than I can. Who cares? What this lie-detector functional-correlate science cannot do is explain how/why I feel at all...
Once we adopt any such concept of qualia, for instance, we will be in a position to answer the question of whether color qualia shift during change blindness. And if some subjects in our apparatus tell us that their qualia do shift, while our brain-scanner data show clearly that they don't, we'll treat these subjects as simply wrong about their own qualia, and we'll explain why and how they come to have this false belief.
I have no problem with that -- but what does it have to do with the hard question...?
Some people find this prospect inconceivable. For just this reason, some people may want to settle for option B: No, my qualia don't change -- couldn't change -- until I notice the change. This decision guarantees that qualia, tied thus to noticing, are securely within the heterophenomenological worlds of subjects, are indeed constitutive features of their heterophenomenological worlds. On option B, what subjects can say about their qualia fixes the data.
You keep talking about "which feeling?" questions (which can always be answered by good, robust "3rd person" functional correlates, possibly better than they can be answered by the 1st person in question). So what? I am not asking about which-feeling questions, but about feeling simpliciter...
By a process of elimination, that leaves option A, YES, to consider. If you think your qualia did change (though you didn't notice it at the time) why do you think this? Is this a theory of yours? If so, it needs evaluation like any other theory. If not, did it just come to you? A gut intuition? Either way, your conviction is a prime candidate for heterophenomenological diagnosis: what has to be explained is how you came to have this belief. The last thing we want to do is to treat your claim as incorrigible. Right?
This is just lie-detector science (forensic neuroscience)...
Here is the dilemma for the B Team, and Captain Chalmers. If you eschew incorrigibility claims, and especially if you acknowledge the competence of 3rd-person science to answer questions that can't be answered from the 1st-person point of view, your position collapses into heterophenomenology.
What position? What proposition is being traded for what proposition? One loses sight of what is at issue...
The only remaining alternative, C(2a), is unattractive for a different reason. You can protect qualia from heterophenomenological appropriation, but only at the cost of declaring them outside science altogether. If qualia are so shy they are not even accessible from the 1st-person point of view, then no 1st-person science of qualia is possible either.
This sounds like it's verging on unfelt feelings again...
I will not contest the existence of first-person facts that are unstudiable by heterophenomenology and other 3rd-person approaches. As Steve White has reminded me, these would be like the humdrum inert historical facts I have spoken of elsewhere -- like the fact that some of the gold in my teeth once belonged to Julius Caesar, or the fact that none of it did. One of those is a fact, and I daresay no possible extension of science will ever be able to say which is the truth. But if 1st-person facts are like inert historical facts, they are no challenge to the claim that heterophenomenology is the maximally inclusive science of consciousness.
Lie-detector science is like weather forecasting, except without the possibility of understanding the causal basis for the predictions and the correlations...
2. David Chalmers as a Heterophenomenological Subject

Of course it still seems to many people that heterophenomenology must be leaving something out. That's the ubiquitous Zombic Hunch. How does the A team respond to this? Very straightforwardly: by including the Zombic Hunch among the heartfelt convictions any good theory of consciousness must explain. One of the things that it falls to a theory of consciousness to explain is why some people are visited by the Zombic Hunch. Chalmers is one such, so let's look more closely at the speech acts Chalmers has offered as a subject of heterophenomenological investigation.

The Zombic hunch sounds equivocal. What is it?

That one believes that there could actually be Turing-Indistinguishable, insentient Zombies? That's sci-fi speculation. Not worth thinking about.

That one does not understand how and why we are not such feelingless Zombies? That sounds like a very fair observation!

Here is Chalmers's definition of a zombie (his zombie twin):

Molecule for molecule identical to me, and identical in all the low-level properties postulated by a completed physics, but he lacks conscious experience entirely . . . he is embedded in an identical environment. He will certainly be identical to me functionally; he will be processing the same sort of information, reacting in a similar way to inputs, with his internal configurations being modified appropriately and with indistinguishable behavior resulting. . . . he will be awake, able to report the contents of his internal states, able to focus attention in various places and so on. It is just that none of this functioning will be accompanied by any real conscious experience. There will be no phenomenal feel. There is nothing it is like to be a Zombie. . . . (1996, p. 95)

This is empty sci-fi. Either such a Zombie is possible, or, much more likely, it is not. Either way, I have no idea how/why.

(Note that I never make a positive speculation about the possibility of Zombies: I simply declare a frank bankruptcy when it comes to explaining how or why we are not Zombies... Replying that Zombies are impossible, but not saying how/why, is just no help...)

Notice that Chalmers allows that zombies have internal states with contents, which the zombie can report (sincerely, one presumes, believing them to be the truth);
This is equivocal. A sentence in a book can be true, but it cannot be "sincere" [on the part of the book] and it cannot be "believed" [by the book]. Same is true for dynamic, on-line books. Same is true for Zombies: No feelings of sincerity, or credence, or anything...

Just sentences, which may or may not be true. And other sentences, which may or may not be interpretable (by someone) as implying the truth of the former sentences.

Let's give the Zombies their due (but no more than their due)...

these internal states have contents, but not conscious contents, only pseudo-conscious contents. The Zombic Hunch, then, is Chalmers's conviction that he has just described a real problem. It seems to him that there is a problem of how to explain the difference between him and his zombie twin.
Too complicated! The only issue is whether or not the Zombie feels!
The justification for my belief that I am conscious lies not just in my cognitive mechanisms but also in my direct evidence [emphasis added]; the zombie lacks that evidence, so his mistake does not threaten the grounds for our beliefs. (One can also note that the zombie doesn't have the same beliefs as us, because of the role that experience plays in constituting the contents of those beliefs.) (Reply to Searle)
Beliefs schmeliefs! The "Zombie", ex hypothesi, does not feel. It is true of him that he does not feel. We do feel. It is true of us that we feel. That's the difference, regardless of whether or not there can actually be such Zombies...
This speech act is curious, and when we set out to interpret it, we have to cast about for a charitable interpretation. How does Chalmers's justification lie in his direct evidence? Although he says the zombie lacks that evidence, nevertheless the zombie believes he has the evidence, just as Chalmers does.
Equivocation on belief again. I find the notion of an entity that does not feel but "believes" it feels to be as coherent as the notion of an unfelt feeling (and for much the same reason). And yes, I freely confess that to me a real belief differs from an as-if belief (i.e., a dynamical state or a physical string of symbols that is systematically interpretable by an external interpreter as a belief) only in that there must be something it feels-like to have a real belief. (So go ahead and shoot me...)


Chalmers and his zombie twin are heterophenomenological twins: when we interpret all the data we have, we end up attributing to them exactly the same heterophenomenological worlds. Chalmers fervently believes he himself is not a zombie. The zombie fervently believes he himself is not a zombie. Chalmers believes he gets his justification from his direct evidence of his consciousness. So does the zombie, of course.
The (hypothetical) Zombie does not "fervently" anything, because he does not feel! He only behaves in a way that is interpretable (by us) as if he felt. If there can indeed be such a Zombie, the how/why difference under discussion would be that difference between actually feeling and merely functioning-as-if-feeling. If there cannot be such a Zombie, then you need to explain, causally/functionally, exactly how/why there cannot.
The zombie has the conviction that he has direct evidence of his own consciousness, and that this direct evidence is his justification for his belief that he is conscious.
No, the hypothetical Zombie has functional states and behaviors that are identical to those of a feeling person and are systematically interpretable (ex hypothesi) as expressing a conviction. But no conviction is expressed because no conviction is felt. All you have is the functional correlates of a felt conviction.
Chalmers must maintain that the zombie's conviction is false.
Chalmers can say that without even having to look at the value on his lie-detector for that Zombie: It is false ex hypothesi. If a Zombie says "I am feeling a toothache now," either he is lying, or he is not a Zombie!
He says that the zombie doesn't have the same beliefs as us because of the role that experience plays in constituting the contents of those beliefs, but I don't see how this can be so. Experience (in the special sense Chalmers has tried to introduce) plays no role in constituting the contents of those beliefs, since, ex hypothesi, if experience (in this sense) were eliminated -- if Chalmers were to be suddenly zombified -- he would go right on saying what he says, insisting on what he now insists on, and so forth. Even if his phenomenological beliefs suddenly ceased to be phenomenological beliefs, he would be none the wiser. It would not seem to him that his beliefs were no longer phenomenological.
Frankly, this pseudo-puzzle looks like it's just a consequence of the highly counterfactual premise: To suppose that something that is molecule-for-molecule identical to me could fail to have feelings sounds about as sensible as to suppose that something that was molecule-for-molecule identical to the moon could fail to have gravity.

The trouble is, that in the moon's case it is easy to explain causally/functionally, exactly how and why this was so unlikely, whereas in the case of my functional clone, my Turing doppelganger (or myself, for that matter), it is (I think intractably) hard...

So let's not invent hypothetical Zombie doppelgangers like this, but instead simply ask for a how/why explanation of the fact that any of us is not a Zombie (Harnad 1995) .

But wait, I am forgetting my own method and arguing with a subject! As a good heterophenomenologist, I must grant Chalmers full license to his deeply held, sincerely expressed convictions and the heterophenomenological world they constitute. And then I must undertake the task of explaining the etiology of his beliefs. Perhaps Chalmers's beliefs about his experiences will turn out to be true, though how that prospect could emerge eludes me at this time. But I will remain neutral. Certainly we shouldn't give them incorrigible status. (He's not the Pope.) The fact that some subjects have the Zombic Hunch shouldn't be considered grounds for revolutionizing the science of consciousness.
This is just carping at the details of far-fetched counterfactuals. I suggest focussing on the actuals: How/why do we feel? How/why are we not Zombies?
3. Where's the Program?

To prove a priori, from one's ivory tower, a metaphysical fact that forces a revolution in the sciences.

Nothing to prove. No metaphysics whatsoever at issue. A simple methodological (hence also epistemic) point about the constraints on causal/functional explanation: It works for everything else, but it doesn't work for feelings...
The Zombic Hunch is accompanied by arguments designed to show that it is logically possible (however physically impossible) for there to be a zombie.
I've always hated that line of argument. I hope I have distanced myself from all that counterfactual sci-fi and formulated the "Zombic Challenge" (it's not a hunch about existence, it's a challenge to explain how/why not) in a more sensible way.
This logical possibility is declared by Chalmers to have momentous implications for the scientific study of consciousness, but as a candidate for the Philosopher's Dream it has one failing not shared with either Einstein's or Matthews' great ideas: it prescribes no research program. Suppose you are convinced that Chalmers is right. Now what? What experiments would you do (or do differently) that you are not already doing? What models would you discard or revise, and what would you replace them with? And why?
If you think feelings are causally/functionally explainable, explain them (or show how it is possible to explain them). If not, then it follows that what you recommend doing (which is "heterophenomenological" data collection, lie-detection, and Zombie functional explanation for it all) is all that we have left anyway.

This is what Turing should have concluded in the first place. Nothing metaphysical about it:

"This paper is accordingly not about what Turing (1950) may or may not have actually thought or intended. It is about the implications of his paper for empirical research on minds and machines. So if there is textual or other evidence that these implications are at odds with what Turing actually had in mind, I can only echo the last line of Max Black's (1952) doubly apposite essay on the "Identity of Indiscernibles," in which the two opposing metaphysical viewpoints (about whether or not two things that there is no way to tell apart are in reality one and the same thing) are presented in the form of a dialogue between two interlocutors. Black affects to be even-handed, but it is obvious that he favours one of the two. The penultimate line, from the unfavoured one, is something like `Well, I am still not convinced'; the last line, from the favoured one, `Well, you ought to be.'"

Chalmers has recently addressed this very issue in a talk entitled "First-Person Methods in the Science of Consciousness" (Consciousness Bulletin, Fall 1999, and on Chalmers's website), but I hunt through that essay in vain for any examples of research that are somehow off limits to, or that transcend, heterophenomenology:

You are closer to David than I am, as I said. You both have what you consider to be positive empirical programs for solving the hard problem -- you Dan, by showing that it is a nonproblem, and David, by showing that there are ways to study the laws of consciousness despite the hard constraints. Whereas I just call a methodological spade the spade it is: There is no accounting for feelings functionally. Period. Back to Turing modeling and heterophenomenological weather-forecasting...
[Dave:] I take it for granted that there are first-person data.
For Dave: data/schmata. Why not just say: "Everyone feels." Nothing fancy, just everyone feels.
[Dave:] It's a manifest fact about our minds that there is something it is like to be us - that we have subjective experiences - and that these subjective experiences are quite different at different times.
"We feel. And feelings vary."
[Dave:] Our direct knowledge of subjective experiences stems from our first-person access to them.
"We feel."
[Dave:] And subjective experiences are arguably the central data that we want a science of consciousness to explain. [emphases added]
"How/why do we feel?"
[Dave:] I also take it that the first-person data can't be expressed wholly in terms of third-person data about brain processes and the like.
"Causal/functional explanations do not explain feeling."
[Dave:] There may be a deep connection between the two - a correlation or even an identity - but if there is, the connection will emerge through a lot of investigation, and can't be stipulated at the beginning of the day [emphasis added].
Here I part company with Dave: I think only further correlations will "emerge." I don't know what kind of "connection" Dave has in mind, but if it's causal (and not telekinetic), I'd forget about it! No other candidate functional role seems viable either.
[Dave:] That's to say, no purely third-person description of brain processes and behavior will express precisely the data we want to explain, though they may play a central role in the explanation. So as data, the first-person data are irreducible to third-person data.
Word-surplus again: "The functional explanation will not explain the feeling."
Notice how this passage blurs the distinctions of heterophenomenology. Arguably? I have argued, to the contrary, that subjects' beliefs about their subjective experiences are the central data. I've reviewed these arguments here today. So, is Chalmers rejecting my arguments? If so, what is wrong with them?
I've lost sight of what is at issue. I don't think Dave would deny that you could predict everything hetero-style; I hope he would deny you could explain everything that way. But mostly, neither the hetero methodology nor the Zombie hunch is relevant to the actual point at issue, which is the hard problem of the causal/functional explanation of feeling.
I agree with him that a correlation or identity -- or indeed the veracity of a subject's beliefs -- can't be stipulated at the beginning of the day. That is the neutrality of heterophenomenology. It is Chalmers who is holding out for an opening stipulation in his insistence that the Zombic Hunch be granted privileged status. As he says, he takes it for granted that there are first-person data. I don't. Not in Chalmers's charged sense of that term.
All the more reason to drop the abstruse language of "first-person data": Do you deny (1) that people feel? (Presumably you do not.) Do you affirm (2) that there can be a causal/functional explanation of feeling? (If so, what is it?) Those are the real issues. The rest of the stuff just obscures them.
I don't stipulate at the beginning of the day that our subjective beliefs about our first-person experiences are phenomenological beliefs in a sense that requires them somehow to depend on (but not causally depend on) experiences that zombies don't have! I just stipulate that the contents of those beliefs exhaustively constitute each person's (or zombie's) subjectivity.
All those words: beliefs, subjective beliefs, phenomenological beliefs, experiences, first-person experiences, subjectivity....

"We feel! How/why?" Your move!

In his paper on first-person methods, Chalmers sees some of the problems confronting a science of consciousness:

When it comes to first-person methodologies, there are well-known obstacles: the lack of incorrigible access to our experience; the idea that introspecting an experience changes the experience; the impossibility of accessing all of our experience at once, and the consequent possibility of "grand illusions"; and more. I don't have much that's new to say about these. I think these could end up posing principled limitations, but none provides an in-principle barrier to at least initial development of methods for investigating the first-person data in clear cases.

Dave is optimistic (for some reason) about "first-person" science. God knows why, or what he expects to come out of it. But the skittishness of experience is not the real obstacle. Even if it were more reliable, it wouldn't help; other data-domains could even be more skittish. That's not where the problem lies...
Right. Heterophenomenology has already made the obligatory moves, so he doesn't need to have anything new to say about these. I don't see anything in this beyond heterophenomenology. Do you? Chalmers goes on:
I'm inclined to agree. You both have positive programs that you both believe bear on the "hard" problem -- yours, Dan to debunk it and replace it by the real goods, Dave's to supplement it with some other stuff (other stuff that you, Dan, have probably subsumed in your hybrid hetero-bin, I agree).

So what we have in you two is a pair of optimists.

Bid welcome to a pessimist (with reasons, and pretty straightforward ones, even though they have not yet been getting much attention in the consciousness press, hetero or otherwise...)

When it comes to first-person formalisms, there may be even greater obstacles: can the content of experience be wholly captured in language, or in any other formalism, at all?
Equivocal. Is it describable in words? Church/Turing would argue yes. Analog/digital gap -- a picture-is-worth-a-thousand-words -- proponents would argue no. Incommensurability proponents would argue no even more loudly. But that every last JND of feeling could be predicted and symbolized is also a viable possibility.

Yet the fact is that nothing profound hangs on this one way or the other...

Many have argued that at least some experiences are "ineffable". And if one has not had a given experience, can any description be meaningful to one? Here again, I think at least some progress ought to be possible. We ought at least to be able to develop formalisms for capturing the structure of experience: similarities and differences between experiences of related sorts, for example, and the detailed structure of something like a visual field.
And, by the way, this is precisely where my own symbol grounding research comes in (Harnad 1990, Cangelosi & Harnad 2000).
If Chalmers speaks of anything in this paper (remember, it is entitled First-person Methods in the Science of Consciousness) that is actually distinct from 3rd-person heterophenomenology, I don't see what it is.
I think I more or less agree with you...
Both there and in his contribution to our debate he mentioned various ongoing research topics that strike him as playing an important role in his anticipated 1st-person science of consciousness -- work on blindsight and masking and inattentional blindness, for instance -- but all this has long ago been fit snugly into 3rd-person science.
In the debate, Chalmers asserted that a heterophenomenological methodology would not be able to motivate questions about what was going on in consciousness in these phenomena. That is utterly false, of course; these very phenomena were, after all, parade cases for heterophenomenology in Consciousness Explained. It is important to remember that the burden of heterophenomenology is to explain, in the end, every pattern discoverable in the heterophenomenological worlds of subjects; it is precisely these patterns that make these phenomena striking, so heterophenomenology is clearly the best methodology for investigating these phenomena and testing theories of them.
Predict every JND of both action and feeling -- but not explain why or how any of it is actually felt...
I find it ironic that while Chalmers has made something of a mission of trying to convince scientists that they must abandon 3rd-person science for 1st-person science, when asked to recommend some avenues to explore, he falls back on the very work that I showcased in my account of how to study human consciousness empirically from the 3rd-person point of view. Moreover, it is telling that none of the work on consciousness that he has mentioned favorably addresses his so-called Hard Problem in any fashion; it is all concerned, quite appropriately, with what he insists on calling the easy problems. First-person science of consciousness is a discipline with no methods, no data, no results, no future, no promise. It will remain a fantasy.
I agree. Turing was right (or ought to have been). Do the do-able, and explain the explicable; and forswear the undo-able and the inexplicable...
Yes, if you are careful to define consciousness so that nothing behavioral can bear on it, you get to declare that consciousness transcends behaviorism without fear of contradiction.
No declarations needed. Anyone careful will agree that doing and feeling are not the same thing...
I simply say that invoking consciousness is not necessary to explain actions; there will always be a physical explanation that does not invoke or imply consciousness. A better phrase would have been "explanatorily superfluous," rather than "explanatorily irrelevant." (Chalmers's second reply to Searle, on his website)
But in fact this is bad news: It reinforces what I've been saying both about the superfluousness of feelings, functionally, and the corresponding impossibility of explaining them functionally.

[These are some further comments after hearing an oral version of the paper at LSE on June 7 2001]

I think there needs to be a C Team, because I, for one, know I don't fit on either A or B.

I am not A, because I definitely believe, and argue (vigorously and rigorously) that the "hard problem" (perhaps a misnomer too -- but for me, the problem of giving a causal/functional explanation of feeling: explaining how and why we feel) has not been solved, and is insoluble.

I am not B, because I don't believe (and never have believed) in the "Zombic Hunch" -- if that hunch is that there could actually be a pair of entities of the same kind, molecularly and functionally, but one of them feels and the other doesn't. That's about as likely (to me) as a body with mass but without gravity. I also don't believe the weaker version of this: that there could be a functionally equivalent but molecularly different pair of entities, and one of them feels and the other doesn't. (In other words, I believe in the Turing Test, though I recognize that neither Turing Equivalence nor Strong Equivalence is as firmly founded as molecular/functional identity when it comes to the probability of feelings.)

And I am also not B, because I have never for a nanosecond believed that an alternative "science of consciousness" was possible. Turing Testing and modeling -- behavioral, physiological and functional/computational -- is the only "method" there is. Call it heterophenomenology if you like (verbal report and behavioral/physiological correlates are as good a way to ascertain what feelings are taking place as any we may want).

But the name of the game is not just inferring and describing feelings, but explaining them. And explaining them is not merely predicting under what conditions they will occur, nor even predicting what they will feel like. That's all easy stuff (by which I just mean normal science). The hard part is (and always has been) this: Suppose you have a successful causal mechanism (be it molecular, synthetic, or computational -- let's not quibble, it doesn't matter) for predicting feelings, including all the functional conditions and states in which they occur, right down to the last reportable JND, in every conceivable situation.

You will still not be able to give even a hint of a hint as to how it is that that mechanism feels at all (you'll just have the molecular or computational mechanism that correlates with the feelings), nor of why (in, say, a Darwinian, or some other functional sense) it feels.

Or, as I prefer to put it (without the slightest commitment to belief in a "Zombic Hunch"): you will be unable to give even a hint of a hint as to how and why a mechanism with precisely the structural and functional properties you have correctly and completely divined in your successful causal mechanism, should be feeling at all (rather than simply doing everything it is so very capable of doing -- behaviorally, physiologically, computationally), but without feeling anything.

In other words, the Hard Problem is simply explaining how and why we are not Zombies! (This does not require us to believe that Zombies are possible. Maybe they are, maybe they are not. Nolo contendere. Explaining how/why they are impossible would be even harder than the hard problem. I'll settle for just an explanation of how/why they are not actual in our own case.)

None of this stuff about heterophenomenology helps one bit with answering that hard question. It only concerns easy questions, such as how good experimenters/theoreticians can be at mind-reading, how good subjects can be at mind-describing -- and I'm ready to grant that both experimenters and subjects can be as good as you like, as good as any cognitive science could need, right down to the last JND!

But that still won't tell you how/why JNDs feel like something -- though, given that they do, mysteriously, feel like something, it will explain why they feel like this rather than like that. But that's an easy question again; it presupposes, or "brackets" [to use your Husserlian phrase], the answer to the hard question of how/why any of these excellent functional correlates/substrates of behaving feel like anything at all (rather than just functioning, i.e., doing, zombily).

So the hard problem (of how/why we are not zombies) is the "feeling/function" problem, or, even more directly, the "feeling/doing" problem: How/why does it feel like something to have (or to be!) certain functional powers? Although that sounds superficially like asking "How/why does gravity pull?" it isn't, because pulling is gravity, but feeling is not doing. (It's function/function in the first case, feeling/function in the second.)


I would add only two other things. First, that "belief" is a weasel-word. This is controversial and (based on the resistance I have encountered over the years) probably original with me:

"I believe that X"

is no different from a sentence on paper or on screen, or implemented dynamically as a computer state -- unless there is something it feels-like to have that belief. If you don't feel, "you" [I hesitate to use the animate 2nd person to refer to a Zombie, I should really say "it"] don't have beliefs, "you" merely have (meaningfully interpretable) internal sentences (or internal states that are interpretable as sentences are).

So, to take your example of change-blindness and incorrigibility:

If I look, repeatedly, at the stimulus you showed us, and it feels-like there is no change, I am saying something absolutely (and incorrigibly, unless I'm delirious and my mind's wandering between the feeling and the report) true when I say it is not changing. There is no doubt something changing in the stimulus, and probably something changing in my brain. And maybe you could ask me to do something else later that reflects (as in masking) the fact that my brain did indeed detect the change, even though I did not feel it. But that does not mean I was wrong about what I felt (if I described it accurately, and didn't forget or get confused in my report).

Eventually I do feel the change. Does that mean I did feel it before, but didn't "realize" it? What on earth does that mean? In plain English, it means I allegedly "felt" it before, but didn't "feel I felt it." (I'm leaving out the word "know," which is as much a weasel-word as "believe" with respect to feelings.) Or maybe I "felt" it "unconsciously" (unfelt feelings! who felt them, then? a Freudian alter-consciousness? wouldn't that generate an extra mind/body problem? forcing me to ask, then, a second hard question, namely: "How/why is my alter ego/id [the one feeling my own unfelt feelings] not a Zombie either?") Occam would be really exercised by all this...

Nonsense. What I didn't feel then (and didn't merely forget I felt), I didn't feel, and nothing else felt it either! So neither change-blindness nor backward masking shows that feelings are fallible qua feelings. (They are of course fallible as correlates or indicators of what they supposedly tell us of the world, but that isn't the issue we are discussing here.)

But it is qua feelings that feelings have to be explained (if we are to face the hard question), rather than their being smuggled in inside a representational wrapping. ("Representation" is, like "belief" and "knowledge," a weasel word, 100% equivocal in relation to the real problem, feeling.)

And last, about neuroscientist Mary: What I said at the table was that the Mary example, and all examples like it, are trivial: They're just trading on feeling-recombinatorics ("Lady, we've already established your profession. We're just haggling about the price"). Mary's color-blind. She uses her existing (black/white) feelings plus her functional knowledge, grounded in those feelings, to infer what colors feel like. There is nothing profound at issue there, whether or not she is "surprised" at whether her prior inference conforms to her subsequent newfound color experience. These are just recombinatory games. Can I exactly image you, after you have left the room, so that when you come back in, there is no discrepancy between what I "expected" and what I saw? Who cares? These are just questions about the ability of feeling systems to infer and imagine, by combining and recombining their JNDs.

The real question is how/why our JNDs feel like anything at all, not whether, given some JNDs, one could correctly infer and image other JNDs! The gap, as always, is between non-feeling and feeling, not between this-feeling and that-feeling.

And I've said all of this using only the f-word, never the q-word (which, I agree with the A-team, is ambiguous and unnecessary: the f-word says it all).

Now well and truly amen.


Cangelosi, A. & Harnad, S. (2000) The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories. Evolution of Communication (Special Issue on Grounding)

Harnad, S. (1982) Consciousness: An afterthought. Cognition and Brain Theory 5: 29 - 47.

Harnad, S. (1991) "Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem" Minds and Machines 1: 43-54.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

Harnad, S. (unpub. ms.) There Is Only One Mind/Body Problem. (Presented at Symposium on the Perception of Intentionality, XXV World Congress of Psychology, Brussels, Belgium, July 1992) International Journal of Psychology 27: 521 (Abstract) http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnadXX.one.mind.body.problem.html

Harnad, S. (1995) What Thoughts Are Made Of. Nature 378: 455-456. Book Review of: Churchland, PM. (1995) The Engine of Reason, the Seat of the Soul: A Philosophical Journey into the Brain (MIT Press) and Greenfield, SA (1995) Journey to the Centers of the Mind. (Freeman)

Harnad, S. (1995) Why and How We Are Not Zombies. Journal of Consciousness Studies 1: 164-167.

Harnad, S. (1996) What to Do About Feelings? [Published as "Conscious Ecumenism" Review of PSYCHE: An Interdisciplinary Journal of Research on Consciousness] Times Higher Education Supplement. June 7 1996, P. 29.

Harnad, S. (1998) Hardships of Cognitive Science. Review of J. Shear (Ed.) Explaining Consciousness (MIT/Bradford 1997) Trends in Cognitive Sciences 2(6): 234-235.

Harnad, S. (2000) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information 9(4): 425-445. (special issue on "Alan Turing and Artificial Intelligence") http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad00.turing.html

Harnad, S. (2000a) Correlation Vs. Causality: How/Why the Mind/Body Problem Is Hard. [Invited Commentary of Humphrey, N. "How to Solve the Mind-Body Problem"] Journal of Consciousness Studies 7(4): 54-61.

Harnad, S. (2001) No Easy Way Out. The Sciences 41(2): 36-42. (Original longer version "Explaining the Mind: Problems, Problems" http://www.cogsci.soton.ac.uk/~harnad/Tp/bookrev.htm )

Harnad, S. (in press) Turing Indistinguishability and the Blind Watchmaker. In: J. Fetzer & G. Mulhauser, (eds.) Evolving Consciousness Amsterdam: John Benjamins (in press) http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad98.turing.evol.html