“Unconscious Feeling” vs. “Unfelt Consciousness”: Detecting the Differences (Reply to David Rosenthal-2)


DR: We need an argument – not just an assertion – that the states we call feelings can’t occur without being conscious (i.e., without being felt).

I agree that an argument is needed, but I’m not sure whether it’s the affirmer or the denier who needs to make the argument!

First, surely no argument is needed for the tautological assertion that feelings and felt states have to be felt: (having) an unfelt feeling or (being in) an unfelt felt-state is a contradiction.

The more substantive assertion is that all unfelt states are unconscious states and all felt states are conscious states (i.e., feeling = consciousness). And its denial would be either that (1) no, there can be states that are unfelt, yet conscious, or (2) no, there can be states that are unconscious yet felt (or both).

I would say the burden, then, is on the denier, either (1) to give examples of unfelt states that are nevertheless conscious states — and to explain in what sense they are conscious states if it does not feel like anything to be in those states — or (2) to give examples of unconscious states that are nevertheless felt states — and to explain who/what is feeling them if it is not the conscious subject — (or both).

(It will not do to reply, for (1), that the subject in the conscious state in question is indeed awake and feeling something, but not feeling what it feels like to be in that state. That makes no sense either. Nor does it make sense to reply, for (2), that the feeling is being felt by someone/something other than the conscious subject.)

What you have in mind, David, I know, is things like “unconscious perception” and blindsight. But what’s meant by “unconscious perception” is that the subject somehow behaves as if he had seen something, even though he is not conscious of having seen it. For example, he may say he did not see a red object presented with a masking stimulus, and yet respond more quickly to the word “dead” than to “glue” immediately afterward (and vice versa if the masked object was blue).

Well there’s no denying that the subject’s brain detected the masked, unseen red under those conditions, and that that influenced what the subject did next. But the fact is that he did not see the red, even though his brain detected it. Indeed, that is why psychologists call this unconscious “perception.” That’s loose talk. (It should have been called unconscious detection.) But in any case, it is unconscious. So it does not qualify as an instance of something that is conscious yet unfelt.

But, by the same token, it also does not qualify as an instance of something that is unconscious yet felt: Felt by whom, if not by the conscious subject? You don’t have to feel a thing in order to “detect”: Thermostats, robots and other sensors do it all the time. Detecting is something you do, not something you feel.

As for blindsight, some of it may be based on feeling after all, just not on visual feeling but on other, nonvisual (e.g., kinesthetic) feelings (for example, feeling where one’s eyes are moving, a doing that is under the control of one’s intact but involuntary and visually unconscious subcortical eye-movement system).

But some blindsight may indeed be based on unconscious detection — which is why the patient (who really can’t see) has to be encouraged to point to the object even though he says he can’t see it. It’s rather like automatic writing or speaking in tongues, and it is surprising and somewhat disturbing to the patient, who will try to rationalize (confabulate) when he is told and shown that he keeps pointing correctly in the direction of the object even though he says he can’t see a thing.

But this too is neither unfelt consciousness nor unconscious feeling: If the subject is not conscious of seeing anything, then that means he is not feeling what it feels like to see. And if he’s not feeling it, neither is anything or anyone else in his head feeling it (otherwise we have more than one mind/body problem to deal with!). If he can nevertheless identify the object before his eyes, then this is unfelt doing, not unconscious feeling.

All these findings simply compound the hard problem: If we don’t really have to see anything in order to detect stimuli presented to our eyes, then why does it feel like something to see (most of the time)?

Ditto for any other sense modality, and any other thing we are able to do: Why does it feel like something to do, and to be able to do, all those things?

And this “why” is not a teleological “why”: It’s a functional “why.” It’s quite natural, if you have a causal mechanism consisting of a bunch of widgets, to ask about any given widget: What’s it doing? What causal role is it playing? What do you need it for?

Normally, there are answers to such questions (eventually).

But not in the case of feeling. And that’s why explaining how and why we feel is a “hard” problem, unlike explaining how and why we do, and can do, what we do. Explanations of doing manage just fine, to all intents and purposes, without ever having to mention feeling (except to say it’s present but seems to have no causal function).

DR: If somebody doesn’t want to apply the term ‘feeling’ to the [states] that aren’t conscious, fine; but I’m maintaining that the very same type of state occurs sometimes as conscious qualitative states and sometimes not consciously.

Unfelt detection cannot be the very same state as felt detection; otherwise we really would have a metaphysical problem! What you must mean, David, is that the two kinds of states are similar in some respects. That may well be true. But the object of interest is precisely the respect in which the states differ: one is felt and the other is not. What functional difference does feeling make (i.e., why are some states felt states?), and how?

And, to repeat, blueness is a quality (i.e., a property — otherwise “quality” is a weasel-word smuggling in “qualia,” another weasel-word which just means feelings). Blueness is a quality that a conscious seeing subject can feel, by feeling what it’s like to see blue. One can call that a “qualitative state” if one likes (and one likes multiplying synonyms!). But just saying that it feels like something to see blue — and that to feel that something is to be in a felt state — seems to say all that needs to be said.

To detect blue without feeling what it feels like to see blue is to detect a quality (i.e., a property, not a “quale,” which is necessarily a felt quality), to be sure, but it is not to be in a “qualitative state” — unless a color-detecting sensor is in a qualitative state when it detects blue.

To insist on calling the detection of a quality “being in a qualitative state” sounds as if what you want to invoke is unconscious feelings (“unconscious qualia”). But then one can’t help asking the spooky question: Well then, who on earth or what on earth is feeling those feelings, if it isn’t the conscious subject?

There’s certainly no need to invoke any spooks in me that are feeling feelings I don’t feel when I am a subject in an unconscious perception experiment, since all that’s needed is unconscious processing and unconscious detection, as in a robot or a tea-pot (being heated). A robot could easily simulate masked lexical priming with optical input and word-probability estimation. But no one would want to argue that either the robot or any part of it was feeling a thing, in detecting and naming red, and its knock-on effects on the probability of finding a word rhyming with “dead”…
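To make that concrete, here is a minimal toy sketch (in Python) of the sort of pipeline meant here: a “detector” that classifies a masked color patch from crude optical statistics and then biases its word choice toward rhymes of that color’s name. Every name and number in it (detect_masked_color, choose_word, PRIME_WEIGHT, and so on) is a hypothetical illustration, not a description of any actual robot or experiment; the only point is that nothing in the loop feels anything.

```python
import random

# Toy "masked priming" pipeline: detection plus response biasing,
# with nothing anywhere that feels anything.
# All names and values here are hypothetical illustrations.

RHYMES = {
    "red":  ["dead", "bed", "shed"],
    "blue": ["glue", "clue", "shoe"],
}
PRIME_WEIGHT = 3  # assumed: how strongly the detected color biases word choice

def detect_masked_color(optical_input):
    """'Unconscious detection': classify the stimulus by crude pixel statistics."""
    r, g, b = optical_input  # mean channel intensities of the briefly presented, masked frame
    return "red" if r > b else "blue"

def choose_word(detected_color, candidates):
    """Pick a word; words rhyming with the detected color's name are more probable."""
    weights = [PRIME_WEIGHT if w in RHYMES[detected_color] else 1 for w in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

if __name__ == "__main__":
    stimulus = (200, 40, 30)                      # a masked reddish patch
    color = detect_masked_color(stimulus)         # detection, not seeing
    word = choose_word(color, ["dead", "glue", "lamp"])
    print(color, word)                            # biased responding; no feeling anywhere
```

Detection, priming and responding all occur in such a pipeline, yet there is no one (and no thing) in it of whom it makes sense to ask what it feels like to be doing any of this.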

DR: [Your saying] “Unfelt properties are not ‘qualitative states.’ Qualitative states are felt states” [is] just the denial of my view.

It depends on what is meant by “qualitative states.” If the robot detecting and naming briefly presented red objects — and subsequently more likely to pick a word that rhymes with “dead” — is an instance of a “qualitative state,” that’s fine, but then we’re clearly talking about the easy problem of doing (responding to optical input, processing words) and not the hard problem of feeling. Neither feeling nor consciousness (if there’s any yet-to-be-announced distinction that can be made between them) plays any part in these same doings in today’s robots.

(All bets are off with Turing-Test-scale robots — but even if they do feel — and I for one believe that Turing robots would feel — we still have to solve the problem of explaining how and why they do feel…)

DR: I don’t understand what it would be for perceiving to be something one feels. For one thing, that begs the question about whether perceiving can occur without being conscious. For another, it seems plain that it can occur without being conscious, as evidenced by subliminal perceiving, and so forth.

If it is unfelt, “perceiving” is just detecting (and responding). We know that countless unfeeling, unconscious devices can do detecting (and responding) without feeling. Hence the question is not whether detection can occur without feeling: it’s how and why some detecting (namely, perceiving) is felt.

The burden of showing that one can make something coherent and substantive out of the putative difference between felt detection and conscious detection is, I have suggested, on the one who wishes to deny that they are one and the same thing. (That’s why “perception” is a weasel-word here, smuggling in the intuition of felt qualities while at the same time denying that anyone is conscious of them.)

So subliminal “perceiving,” if unfelt, is not perceiving at all, but just detecting.

DR: Well, I don’t know that [“perceiving” is] a weasel word – though I agree that it means both [detecting and feeling]. In the nonconscious case it’s (mere) detecting; in the conscious case, it’s conscious detecting, i.e., feeling.

Agreed!

But that does make it seem as if “feeling” and “consciousness” are pretty much of a muchness after all. And that whatever is or can be done via unfeeling/unconscious detection is unproblematic (or, rather, the “easy” problem), and what remains, altogether unsolved and untouched, is our “hard” problem of how and why some detection is felt/conscious…

DR: There are two issues here. One is to explain why some qualitative states – some perceivings – come to be conscious; why don’t all remain subliminal? I think that’s a difficult question, which I address in several publications (e.g., Consciousness and Mind), but I won’t address here.

Perhaps, David, in your next posting you could sketch the explanation, since that is the very question (for you “difficult,” for others, “hard”) that we are discussing here. If we are agreed that “unconscious” = “unfelt” = “subliminal” states are all, alike, the “easy” ones to explain, whereas the “felt” = “conscious” = “supraliminal” states are the “hard” ones to explain (and only weasel-words like “qualitative states” and “perception” have been preventing us from realizing that), then it’s clearly how you address the hard problem (of explaining how and why some states are felt/conscious) that would be of the greatest interest here.

DR: I don’t think, however, that it’s reasonable to assume that everything has a utility or function, so that it can’t be the case that at least some of the utility or functionality of perceiving occurs consciously. Not everything that occurs in an organism is useful for the organism. But that’s for another day.

I couldn’t quite follow the middle clause (beginning “so that it can’t be the case”), but it sounds as if you are suggesting that there may be no functional/causal explanation for why some doing and doing-capacity is felt.

I’m not sure which would be more dissatisfying: that there is no way to explain how and why some functional states are felt, or that some functional states are felt for no reason at all! The first would be a huge, perplexing fact doomed to remain unexplained; the second would be a huge, perplexing fact that is a mere accident.

DR: [Your saying] “We cannot go on to the (easy) problem of ‘higher-order awareness’ until we have first solved the (hard) problem of awareness (feeling) itself” begs the question against the higher-order theory – and the occurrence of nonconscious qualitative states.

I think we’ve agreed that calling unfelt/unconscious states “qualitative” is merely a terminological issue. But, on the face of it, bootstrapping to higher-order awareness without first having accounted for awareness (feeling) itself seems rather like the many proofs — before the proof of Fermat’s Last Theorem — of the higher-order theorems that would follow from Fermat’s Last Theorem, if Fermat’s Last Theorem were true. Maths allows these contingent necessary truths — following necessarily from unproved premises — because maths really is just the study of the necessary formal consequences of various assumptions (e.g., axioms).

But here we are not talking about deductive proofs. We are talking about empirical data and (in the case of cognitive science, which is really just a branch of reverse bioengineering) the causal mechanisms that generate those empirical data.

So it seems to me that a theory of higher-order consciousness is hanging from a skyhook if it has not first explained consciousness (feeling) itself.
