{"id":582,"date":"2018-12-31T21:36:42","date_gmt":"2018-12-31T21:36:42","guid":{"rendered":"http:\/\/generic.wordpress.soton.ac.uk\/skywritings\/?p=582"},"modified":"2018-12-31T21:36:42","modified_gmt":"2018-12-31T21:36:42","slug":"talking-about-feeling-summary-of-forum","status":"publish","type":"post","link":"http:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2018\/12\/31\/talking-about-feeling-summary-of-forum\/","title":{"rendered":"Talking About Feeling: Summary of Forum"},"content":{"rendered":"
In my little essay<\/a> I tried to redraft the problem of consciousness — the “mind\/body problem” — as the problem of explaining how and why we feel rather than just do. <\/p>\n It was not meant as a terminological exercise. The usual way we talk about consciousness and mental states uses weasel-words (“conscious,” “mental,” “experience”) that are systematically ambiguous about whether we are just talking about access to data (an easy problem, already solved in principle by computation, which is simply an instance of doing) or about felt<\/em> access to data (the hard part being to explain not just the doing but the feeling).<\/p>\n Nor was it meant as a metaphysical exercise: The problem is not one of “existence” (feeling indubitably exists) but of explanation<\/em>: How? Why?<\/p>\n The commentaries were a fair sample, though a small one, of the issues and the kinds of views thinkers have on them today. A much fuller inventory will be presented at the 2012 Summer School on the Evolution and Function of Consciousness<\/a> in Montreal June\/July of next year. Think of this small series of exchanges in the On the Human<\/em> Forum as an overture to that fuller opus.<\/p>\n I have already responded in detail individually to each of the 10 commentators (15 commentaries) so I will just summarize the gist here:<\/p>\n Judith Economos<\/strong> rightly insists, as the only one with privileged access to what’s going on in her mind, that it is not true that she feels everything of which she is conscious: Some of it — the part that is not sensory or emotional — she simply knows<\/em>, though it doesn’t feel like anything to know it. I reply (predictably) that “know,” too, is a weasel-word, ambiguous as between felt and unfelt access to data. So if one is awake (conscious) whilst one is knowing, one is presumably feeling something<\/em>. One is also, presumably, feeling something whilst one is not-knowing something, or knowing something else. 
If all three of those states feel identical, how does one know the difference? For if “knowing” just refers to having data, then it is just a matter of know-how<\/a> (doing), which is already explained (potentially) by computation, and has nothing to do with consciousness.<\/p>\n Galen Strawson<\/strong> seems to agree with me on the distinction, but prefers “experience” (“with qualitative character”) to “feeling.” Fine — but “experience” alone is ambiguous; and trailing the phrase “with qualitative character” after it seems a bit burdensome to convey what “feel” does in one natural, intuitive, monosyllabic swoop. The substantive disagreement with Galen is about the coherence and explanatory value of “panpsychism” (i.e., the metaphysical hypothesis that feeling, or the potential to feel, is a latent and ubiquitous property of the entire universe) as a solution to the hard problem. The existence of feeling is not in doubt. But calling it a fundamental take-it-or-leave-it basic property of the universe does not explain it; it’s just a metaphysical excuse for the absence of an explanation!<\/p>\n Shimon Edelman<\/strong> is more optimistic about an explanation because there are computational and dynamic ways to “mirror” every just-noticeable difference (JND) in a system’s input in differences in its internal representations. This would certainly account for every JND a system can discriminate; but discrimination is doing<\/em>: The question of how and why the doing is felt is left untouched.<\/p>\n David Rosenthal<\/strong> interprets the experimental evidence for “unconscious perception” as evidence for “unconscious feeling,” but, to me, that would be the same thing as “unfelt feeling,” which makes no sense. So if it’s not feeling, what is unconscious “perception”? It is unconscious detection and discrimination — in other words, internal data-doings and dispositions that are unproblematic because they are unfelt (the easy problem). 
If all of our know-how were like that, we’d all be Zombies and there would be no hard problem. David needs unconscious perception to be able to move on to higher-order consciousness (but that is, of course, merely higher-order access<\/em> — the easy part, until\/unless feeling itself is first explained). So this seems like recourse to either a bootstrap or a skyhook.<\/p>\n John Campbell<\/strong> points out that sensorimotor grounding is not enough to explain meaning unless the sensing is felt, and I agree. But he does not explain how or why sensorimotor grounding is felt.<\/p>\n Anil Seth<\/strong> reminds us that many had thought that there was a “hard problem” with explaining life, too, and that that turned out to be wrong. So there’s no reason not to expect that feeling will eventually be explained too. The trouble is that apart from the observable properties of living things (“doings”) there was never anything else that vitalists could ever point to<\/em>, to justify their hunch that life was inexplicable unless one posited an “elan vital<\/em>.” Modern molecular biology has since shown that all the observable properties of life could be explained, without remainder, after all. But in the case of feeling there is<\/em> a property to point to — observable only to the feeler, but as sure as anything can be — that the full explanation of the observable doings leaves out and hence cannot account for. (Perhaps feeling is the property that the vitalists had in mind all along.)<\/p>\n The remaining commentaries seem to be based on misunderstandings: <\/p>\n Bernard Baars<\/strong> took “Turing Robot” to refer to “Turing Machine.” It does not. A Turing Machine is just a formalization of computation. The internal mechanism of a Turing Robot can be computational or dynamical (i.e., any physical process at all, including neurobiological).<\/p>\n Krisztian Gabris<\/strong> thinks feelings are needed to “motivate” us to do what needs to be done. 
That’s certainly what it feels like to us. But on the face of it, the only thing that’s needed is a disposition<\/em> to do what needs to be done. That’s just know-how and doing, already evident in toy robots and toasters. How and why it (sometimes) feels like something<\/em> to have a disposition to do something remains unexplained.<\/p>\n Joel Marks<\/strong> assumed that the Turing Robot would be an unfeeling Zombie. This is not necessarily true. (I think it would feel — it’s just that we won’t be able to know whether it feels; and even if it does feel, we will be unable to explain how or why.) Hence Joel’s question about whether it would be wrong to create a robot that feared death is equivocal: By definition, if it’s a Zombie, it cannot fear, it can only act as if it feared. (Witnessing that may make us<\/em> feel bad, but the Zombie — if there can be Zombies — would feel nothing at all.) And if the Turing Robot feels, it’s as important to protect it from hurt as it is to protect any other feeling creature from hurt.<\/p>\n","protected":false},"excerpt":{"rendered":" In my little essay I tried to redraft the problem of consciousness — the “mind\/body problem” — as the problem of explaining how and why we feel rather than just do. It was not meant as a terminological exercise. The usual way we talk about consciousness and mental states uses weasel-words (“conscious,” “mental,” “experience”) that … <\/p>\n