Talking About Feeling: Summary of Forum

In my little essay I tried to recast the problem of consciousness — the “mind/body problem” — as the problem of explaining how and why we feel rather than just do.

It was not meant as a terminological exercise. The usual way we talk about consciousness and mental states uses weasel-words (“conscious,” “mental,” “experience”) that are systematically ambiguous about whether we are just talking about access to data (an easy problem, already solved in principle by computation, which is simply an instance of doing) or about felt access to data (the hard part being to explain not just the doing but the feeling).

Nor was it meant as a metaphysical exercise: The problem is not one of “existence” (feeling indubitably exists) but of explanation: How? Why?

The commentaries were a fair sample, though a small one, of the issues and the kinds of views thinkers have on them today. A much fuller inventory will be presented at the 2012 Summer School on the Evolution and Function of Consciousness in Montreal, June/July of next year. Think of this small series of exchanges in the On the Human Forum as an overture to that fuller opus.

I have already responded in detail to each of the 10 commentators (15 commentaries) individually, so I will just summarize the gist here:

Judith Economos rightly insists, as the only one with privileged access to what’s going on in her mind, that it is not true that she feels everything of which she is conscious: Some of it — the part that is not sensory or emotional — she simply knows, though it doesn’t feel like anything to know it. I reply (predictably) that “know,” too, is a weasel-word, ambiguous as between felt and unfelt access to data. So if one is awake (conscious) whilst one is knowing, one is presumably feeling something. One is also, presumably, feeling something whilst one is not-knowing something, or knowing something else. If all three of those states feel identical, how does one know the difference? For if “knowing” just refers to having data, then it is just a matter of know-how (doing), which is already explained (potentially) by computation, and has nothing to do with consciousness.

Galen Strawson seems to agree with me on the distinction, but prefers “experience” (“with qualitative character”) to “feeling.” Fine — but “experience” alone is ambiguous; and trailing the phrase “with qualitative character” after it seems a rather burdensome way to convey what “feel” does in one natural, intuitive, monosyllabic swoop. The substantive disagreement with Galen is about the coherence and explanatory value of “panpsychism” (i.e., the metaphysical hypothesis that feeling, or the potential to feel, is a latent and ubiquitous property of the entire universe) as a solution to the hard problem. The existence of feeling is not in doubt. But calling it a fundamental take-it-or-leave-it basic property of the universe does not explain it; it’s just a metaphysical excuse for the absence of an explanation!

Shimon Edelman is more optimistic about an explanation because there are computational and dynamic ways to “mirror” every discriminable difference (every just-noticeable difference, JND) in a system’s input in differences in its internal representations. This would certainly account for every JND a system can discriminate; but discrimination is doing: The question of how and why the doing is felt is left untouched.
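
For concreteness, here is a toy sketch (Python, purely illustrative; the JND value and the binning scheme are assumptions of mine, not Shimon’s model) of what such unfelt discrimination amounts to:

    # Toy sketch: "mirroring" input differences in internal representations.
    # Stimuli are quantized into JND-wide bins; two inputs get distinct
    # internal states (roughly) when they differ by at least one JND.

    JND = 0.5  # assumed just-noticeable difference, in arbitrary units

    def internal_representation(intensity: float) -> int:
        # The internal state is just a bin index: data, not feeling.
        return int(intensity // JND)

    def discriminates(a: float, b: float) -> bool:
        # Discrimination = comparing internal states. Pure doing.
        return internal_representation(a) != internal_representation(b)

    print(discriminates(1.0, 1.2))  # False: same bin, under one JND apart
    print(discriminates(1.0, 1.6))  # True: different bins

Every discriminable difference is mirrored in a difference of internal state; the question of how or why any of this would be felt never arises in the code — which is exactly the point.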

David Rosenthal interprets the experimental evidence for “unconscious perception” as evidence for “unconscious feeling,” but, to me, that would be the same thing as “unfelt feeling,” which makes no sense. So if it’s not feeling, what is unconscious “perception”? It is unconscious detection and discrimination — in other words, internal data-doings and dispositions that are unproblematic because they are unfelt (the easy problem). If all of our know-how were like that, we’d all be Zombies and there would be no hard problem. David needs unconscious perception to be able to move on to higher-order consciousness (but that is, of course, merely higher-order access — the easy part, until/unless feeling itself is first explained). So this seems like recourse to either a bootstrap or a skyhook.

John Campbell points out that sensorimotor grounding is not enough to explain meaning unless the sensing is felt, and I agree. But he does not explain how or why sensorimotor grounding is felt.

Anil Seth reminds us that many had thought that there was a “hard problem” with explaining life, too, and that that turned out to be wrong. So there’s no reason not to expect that feeling will eventually be explained too. The trouble is that apart from the observable properties of living things (“doings”) there was never anything else that vitalists could point to in order to justify their hunch that life was inexplicable unless one posited an “élan vital.” Modern molecular biology has since shown that all the observable properties of life could be explained, without remainder, after all. But in the case of feeling there is a property to point to — observable only to the feeler, but as sure as anything can be — that the full explanation of the observable doings leaves out and hence cannot account for. (Perhaps feeling is the property that the vitalists had in mind all along.)

The remaining commentaries seem to be based on misunderstandings:

Bernard Baars took “Turing Robot” to refer to “Turing Machine.” It does not. A Turing Machine is just a formalization of computation. The internal mechanism of a Turing Robot can be computational or dynamical (i.e., any physical process at all, including neurobiological).
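
For readers who may make the same conflation, here is a minimal illustration (Python; the particular rule table is a toy of my own choosing, not Bernard’s or Turing’s example) of what a Turing Machine is: a finite rule table reading and writing symbols on a tape — a formalization of computation and nothing more.

    # Minimal Turing Machine sketch: a finite rule table over a tape.
    # This toy machine inverts a binary string, then halts on blank.

    def run_turing_machine(tape: str) -> str:
        cells = list(tape)
        state, head = "scan", 0
        while state != "halt":
            symbol = cells[head] if head < len(cells) else "_"  # "_" = blank
            if state == "scan" and symbol == "0":
                cells[head] = "1"; head += 1   # write, move right
            elif state == "scan" and symbol == "1":
                cells[head] = "0"; head += 1   # write, move right
            else:
                state = "halt"                 # blank cell: stop
        return "".join(cells)

    print(run_turing_machine("0110"))  # -> "1001"

A Turing Robot, by contrast, is a body acting in the world, and its internal mechanism need not be any such rule table at all.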

Krisztian Gabris thinks feelings are needed to “motivate” us to do what needs to be done. That’s certainly what it feels like to us. But on the face of it, the only thing that’s needed is a disposition to do what needs to be done. That’s just know-how and doing, already evident in toy robots and toasters (see the sketch below). How and why it (sometimes) feels like something to have a disposition to do something remains unexplained.
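
A minimal sketch (Python, purely illustrative; the thermostat-style set-point is my assumption, not Krisztian’s example) of such a motivation-free disposition:

    # Toy disposition: a toaster-like controller "does what needs to be
    # done" via a bare condition-action rule; no motivation required.

    TARGET_C = 180.0  # assumed set-point, arbitrary

    def keep_heating(temperature_c: float) -> bool:
        # The whole "disposition" is this conditional, nothing more.
        return temperature_c < TARGET_C

    for t in (20.0, 179.0, 181.0):
        print(t, keep_heating(t))  # heats up to the target, then stops

Nothing here is (presumably) felt; the bare disposition suffices for the doing.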

Joel Marks assumed that the Turing Robot would be an unfeeling Zombie. This is not necessarily true. (I think it would feel — it’s just that we won’t be able to know whether it feels; and even if it does feel, we will be unable to explain how or why.) Hence Joel’s question about whether it would be wrong to create a robot that feared death is equivocal: By definition, if it’s a Zombie, it cannot fear, it can only act as if it feared. (Witnessing that may make us feel bad, but the Zombie — if there can be Zombies — would feel nothing at all.) And if the Turing Robot feels, it’s as important to protect it from hurt as it is to protect any other feeling creature from hurt.
