Consciousness Offline: Le Salon des Refusés

On 2013-02-18, at 9:09 AM, Consciousness Online [Richard Brown] wrote:

Hi Stevan, your recent comment (below) has not been approved. It is not relevant to the session. This session is not about the hard problem of consciousness (or the mind/body problem). That debate has (more than) run its course in your session from two years ago. Thank you for your understanding.

Richard: Are you joking? Did you watch the video we were supposed to comment on?

This is getting a little ridiculous. I think your theoretical preferences are getting the better of your objectivity.

Of course this is about the mind/body problem. Of course it’s about the “hard” problem. What on earth else do you think it’s about?

I’ve just about had it now with this arbitrary dismissiveness.

And I don’t appreciate the remark about “more than running its course”.

Restore my commentary or kindly take me off the list and send me no more messages about “Consciousness Online.”

I don’t have the time to write focussed, substantive commentaries only to have them rebuffed because they don’t meet someone’s tastes or preconceptions.

Stevan


COUNTING THE WRONG CONSCIOUSNESS OUT

Stevan Harnad

[Commentary on Dan Dennett on “On a Phenomenal Confusion about Access and Consciousness”]
Yes, there was a phenomenal confusion in doubling our mind-body-problems by doubling our consciousnesses.

No, organisms don’t have both an “access consciousness” and a “phenomenal consciousness.”

Organisms’ brains (like robots’ brains) have access to information (data).

Access to data can be unconscious (in organisms and robots) or conscious (in organisms, sometimes, but probably not at all in robots, so far).

And organisms feel. Feeling can only be conscious, because feeling is consciousness.

So the confusion is in overlooking the fact that there can be either felt access (conscious) or unfelt access (unconscious).

The mind-body problem is of course the problem of explaining how and why all access is not just unfelt access. After all, the Darwinian job is just to do what needs to be done, not to bask in phenomenology.

Hence it is not a solution to say that all access is unfelt access and that feeling — or the idea that organisms feel — is just some sort of a confusion, illusion, or action!

If, instead, feeling has or is some sort of function, let’s hear what it is!

(Back to the [one, single, familiar] mind/body problem — lately, fashionably, called the “hard” one.)

More prior commentaries here.

To comment further, please go to Philpapers.


ILL-JUSTIFIED TRUE BELIEF
Organisms with nervous systems don’t just do what needs to be done in order to survive and reproduce. They also feel. That includes all vertebrates and probably all invertebrates too. (As a vegan, I profoundly hope that plants don’t feel!)

There’s no way to know for sure (or to “prove”) that anyone else but me feels. But let’s agree that for vertebrates it’s highly likely and for computers and today’s robots (and for teapots and cumquats) it’s highly unlikely.

Do we all know what we mean when we say organisms feel? I think we do. I have no way to argue against someone who says he has no idea what it means to feel — meaning feel anything at all — and the usual solution (a pinch) is no solution if one is bent on denying.*

You can say “I can sorta feel that the temperature may be rising” or “I can sorta feel that this surface may be slightly curved.” But it makes no sense to say that organisms just “sorta feel” simpliciter (or no more sense than saying that someone is sorta pregnant):

The feeling may feel like anything; it may be veridical (if the temperature is indeed rising or the surface is indeed curved) or it may be illusory. It may feel strong or weak, continuous or intermittent, it may feel like this or it may feel like that. But either something is being felt or not. I think we all know exactly what we are talking about here. And it’s not about proving whether (or when or where or what) another organism feels: it’s about our 1st-hand sense of what it feels like to feel — anything at all. No sorta’s about it.

The hard problem is not about proving whether or not an organism or artifact is feeling. We know (well enough) that organisms feel. The hard problem is explaining how and why organisms feel, rather than just do, unfeelingly. (Because, no, introspection certainly does not tell us that feeling is whatever we are doing when we feel! I do fully believe that my brain somehow causes feeling: I just want to know how and why: How and why is causing unfelt doing not enough? No “rathering” in that!)

After all, on the face of it, doing is all the Blind Watchmaker really needs, in order to get the adaptive job done (and He’s no more able to prove that organisms feel than any of the rest of us is).

The only mystery is hence how and why organisms feel, rather than just do. Because doing-power seems like the only thing organisms need in order to get by in this Darwinian world. And although I no more believe in the possibility of Zombies than I do in the possibility of their passing the Turing Test, I certainly admit frankly that I haven’t the faintest idea how or why there cannot be Zombies. (Do you really think, Dan, that that’s on a par with the claim that one hasn’t the faintest idea what “feelings” are?)

*My suspicion is that the strategy of feigning ignorance about what is meant by the word “feeling” is like feigning ignorance about any and every predicate: Whenever someone asks what “X” means, I can claim I don’t know. And then when they try to define “X” for me in terms of other predicates, I can claim I don’t know what those mean either; all the way down. That’s the “symbol grounding problem,” and the solution is direct sensorimotor grounding of at least some of the bottom predicates, so the rest can be reached by recombining the grounded ones into propositions to define and ground the ungrounded ones. That way, my doings would contradict my verbal denial of knowing the meanings of the predicates. But of course sensing need not be felt sensing: it could just be detecting and responding, which is again just doing. So just as a toy robot today could go through the motions of detecting and responding to “red” and even say “I know what it feels like to see red” without feeling a thing, just doing, so, in principle, might a Turing-Test-Passing Cog just be going through the motions. This either shows (as I think it does) that sensorimotor grounding is not the same as meaning, or, if it doesn’t show that, then someone still owes me an explanation of how and why not. And this, despite the fact that I too happen to believe that nothing could pass the Turing Test without feeling or meaning. It’s just that I insist on being quite candid that I have no idea of how or why this is true, if, as I unreservedly believe, it is indeed true. It’s an ill-justified true belief. Justifying it is the hard problem.


FEELING BY FIAT

@Richard Brown: “felt representing (i.e. consciousness) occurs when one represents oneself as being in some other representation in a way that seems subjectively unmediated… There is no equivocation here; the claim is that feeling (i.e. consciousness) consists in a certain kind of cognitive access. What’s the argument against this view? That there can be these kinds of representations without feeling? That is called begging the question.”

The argument against this claim is that it is an ad hoc posit: an attempt to solve a substantive problem by definition.

My critique is on-topic (access vs. feeling), the matter is far from settled, and neither your comments nor mine prevent Dan or anyone else from responding.
