Making Sense of Sensing

“Wouldn’t it short-circuit all these discussions if you just came out and said that this is how you use the word “Feeling”, that is, to mean any conscious notion or awareness whatever, even if it is not a sensation like taste or pain or fear?  You say “feeling” is a nice honest word, while words like “awareness” and “conscious” are weasel words.  But since a lot of us cannot agree that wondering idly whether it will rain next Tuesday is a feeling, then when you say it is because it just has to be, good old honest-yeoman uncorrupt “feeling” slips into weaseldom, or at least mush, just as all the other words do.  

“Perhaps Hofstadter is right: because these words refer to states we cannot point to or compare, words grounded (in your term) only in private experience, then we are simply clashing by night. We don’t really know what each other means by any of them.  I will swear that I can know I am thinking about next Tuesday, or the square root of twelve, and can tell the difference between these notions, but it is all done separate from sensation of any kind.

“I repeat, why CAN’T the brain deliver information to one’s awareness by at least one other avenue than feelings?  To insist that it cannot makes your denial cease to be an empirical statement and become a definition of “feeling”.”

Very good challenge, and I’m happy to try to rise to the occasion!

The brain not only can but does “deliver information” without its being felt. It not only delivers information, but gets things done.

It does nocturnal deliveries while we’re asleep, of course, but it also does a lot while we’re awake (keeps my heart beating, keeps me upright, and, most important, delivers answers to my (felt) questions served on a platter (“what was that person’s name?”, “where am I going?”, “what word should I say next?”) without my feeling any of the work that went into it).

These are things we do, and feel we do (“find” the name, “recall” where I’m going, “decide” what to say next), but we are clueless about their provenance: We have no idea how we do them. Our brain does them, and then “delivers” the result.

Some of this delivery is delivery of know-how (riding a bike, speaking) and some of it is of know-that (facts, or putative facts). 

We are the “recipients” of the delivery, and the question is, how does our brain do it?

But these are the “easy” questions: Cognitive neuroscience will eventually tell us how our brain does and “delivers” all these things for us.

But that’s not the hard part. The hard part is explaining why and how it feels like something to be the “recipient” of these “deliveries.” If the result of the deliveries were merely doings and sayings, there would be no issue, because there would be nothing mental; it would all just be mechanical, neurosomatic dynamics. 

Now, you are sort of forcing me to do some phenomenology here — something I’m not particularly good at, nor inclined to set great store by. But here goes:

Am I just linguistically legislating that having received a “delivery,” [say, the “information,” X, that it’s Tuesday today] from their brain, what people mean by “I am aware of X” has to be “It feels as if X is the case”?

Or, worse, am I presumptuously denying what is not only other people’s private privilege but (by my own lights) certain and incorrigible, when I say that people are wrong when they insist it doesn’t feel like anything to know it’s Tuesday? Wrong to settle for saying they just know it, that it’s one of those pieces of “information delivered” by their brain, and that’s all there is to it?

That would be fine, it seems to me, if the “delivery” were taking place while you were asleep or anesthetized or comatose. 

But it seems to me (and here I am doing some amateur phenomenology) that the difference between being (dreamlessly) asleep and being awake is that it feels like something to be awake and it does not feel like anything to be dreamlessly asleep.

“Information” “delivered” and even “executed” by my brain while I am asleep is also being served on a platter, just as it’s served on a platter when I’m awake: I’m just not feeling anything the while.

So far you will say you could have substituted “not aware of (a ‘delivery’)” for “not feeling (a ‘delivery’)” and covered the same territory without being committed to its having to feel like something to be aware of something.

But I can only ask, what does it mean to be awake and aware of something if it does not feel like something to be awake and aware of something?

If you reply “It feels like something to be aware of something, but only in the sense that it feels like something while I’m being aware of something, because I happen to be awake, and being awake feels like something” — then I will have to reply that you are losing me, when you say that it feels like something while you receive the “delivery” but that that something it feels like is not what it feels like to receive the delivery!

Yes, our language about this is getting somewhat complicated, so let me remind you that, yes, our difference could be merely terminological here, for much the same reason that (if I remember correctly) you had objected, years ago, to my insistence that seeing, too, is feeling. 

I think you said that feeling tired is feeling, or feeling anger is feeling, and even feeling a rough surface is feeling, but seeing red is not feeling, it’s seeing. And the way I tried to convey what I meant by “feel” was to point out that you too would agree (and you did) that it feels like something (rather than nothing) to see red. And it feels like something different to see green, or to hear middle C or to smell a rose.

I think I even said that it is just our language — which says I am feeling a headache or I am feeling cold or I am feeling a rough surface, yet not “I am feeling red” but rather “I am seeing red,” and not “I am feeling the perfume” (if we don’t mean palpating it but sniffing it) but “I am smelling the perfume” — that is fooling us a bit, when we conclude from our wording that seeing is not feeling.

I think I even mentioned French, in which both feeling and smelling are (literally): “je sens la douleur”, “je sens le parfum,” as is palpating (“je sens la surface”), whereas, as in English, seeing and hearing have verbs of their own.

There is in the French the residue of the Latin “sentio” — to feel — that still exists in English, but as a sort of ambiguous false-friend, “I sense,” which means more “I intuit” or “I pick up on” than “I feel.” But I would say the same thing about sensing: If I sense something, be it sensory, affective, tactual, thermal, cognitive, or intuitive, then it feels like something to be sensing it, and would feel like something else to be sensing something else, as surely as it feels like something to be seeing red and would feel like something else to see something else.

And not just because I happen to be awake while my brain “delivers” the “information”!

So if I am sensing that it’s Wednesday today, then that feels like something, and feels like something different from sensing that it’s Tuesday today as surely (but perhaps not as intensely) as seeing red feels different from seeing blue.  

To put it another way, the result of the “delivery” is not just my “speaking in tongues.” It feels like something not only to say (or think) the words “It’s Wednesday today” but to mean them. And it feels like something else not only to say (or think) but to mean (or understand) something else.

Entropy

Sociopaths, sadists, zealots and lunatics there have always been. But technology has now empowered them to do harm far beyond their numbers: The “normal” distribution is becoming a hostage, perhaps irretrievably, to a reign of terror from its tail-end.

Comments on Doug Hofstadter’s “I Am A Strange Loop”

(1) Is feeling/nonfeeling an all-or-none distinction? 

The answer is most definitely yes. (But the question is not about whether I’m feeling this or that, nor about whether I am feeling more or less. It is about whether I am feeling at all. I can feel a little tired, say, half-tired, but I can’t half-feel — any more than I can half-move [or one can be a little bit pregnant].)

(2) Is believing a feeling (and if so, what’s my evidence that that’s true)?

The answer is most definitely yes, and the evidence is of precisely the same kind as the evidence that seeing — or hearing or smelling or hurting — is feeling. There’s something it feels like to smell roses, and when you’re smelling carnations — or onions — it feels different. In exactly the same way (but more subtly), there’s something it feels like to be believing that it’s Tuesday today, and something different it feels like to be believing it’s Wednesday (and not just the sound of the words it takes to say one or the other). Every JND (just-noticeable difference) in mental space feels different. That’s what makes mental states mental, and how we tell different mental states apart: Otherwise I wouldn’t know whether or not I was believing it’s Tuesday any more than I would know whether or not I was in pain. (Knowing is feeling too!)

Aside: None of this has anything to do with Zombies (and I have next to nothing to do with or say about Zombies). But just for the sake of logical coherence: A zombie would be a lookalike that behaved and talked indistinguishably from us, but did not feel. It could not be believing it felt, because believing is feeling! It would merely be behaving (and speaking) exactly as if it were feeling (and believing, and believing it was feeling). 

I consider such a possibility so far-fetched and arbitrary as to be absurd, so I never base any argument on the possibility that there could be such a thing. 

However, I do point out that we can no more explain how and why there could not be Zombies than we can explain how or why we feel (the “hard problem”). Zombies are absurd because all the evidence is against them: All the entities that behave as if they feel are in fact, like us, biological organisms that feel. We don’t know how or why we all feel, but we do know that we invariably do. The speculation that this invariance could be broken — with entities acting exactly as if they felt, but not feeling a thing — is as far-fetched as imagining a universe in which apples fell up rather than down, or in which the second law of thermodynamics ran in reverse. Not only can nothing interesting, one way or the other, be derived from such idle suppositions, but — and this is most important — even the correct supposition that Zombies are impossible does not do anything whatsoever toward solving the hard problem (of explaining how and why they are impossible, which is equivalent to explaining how and why we feel, rather than just do).

The statement that “believing is feeling” is no less supported, I should think, than “hurting is feeling”: I can’t do much more than ostension and appeal to what I am pretty confident are our fundamentally similar mental lives in either case. (I did make a bit of a supporting argument about JNDs just now. The gist is that the only thing that distinguishes mental states is that they feel different: Otherwise what makes them not the same mental state? The fact that they may be followed by different behavioral dispositions won’t do the trick, because the states are now, not later, so later divergence in behavioral dispositions still doesn’t distinguish the mental states now, when I’m having them. My knowledge that I believe it’s Tuesday today and that I don’t believe it’s Wednesday cannot come from what I am inclined to do later — unless, of course, it feels different to be inclined to do this rather than that — which would be fine with me; that still leaves the difference between beliefs as a difference in what they feel like…)

Excerpts from Doug Hofstadter’s “I Am a Strange Loop”:

Semantic Quibbling in Universe Z

There is one last matter I wish to deal with, and that has to do with Dave Chalmers’ famous zombie twin in Universe Z. Recall that this Dave sincerely believes what it is saying when it claims that it enjoys ice cream and purple flowers, but it is in fact telling falsities, since it enjoys nothing at all, since it feels nothing at all — no more than the gears in a Ferris wheel feel something as they mesh and churn.

I completely agree that this is incoherent — simply because believing is feeling. What Chalmers should have said is that the Zombie behaves and talks exactly as if he were feeling (including believing, and believing that he was feeling) but is in fact feeling (and hence believing) nothing.

Well, what bothers me here is the uncritical willingness to say that this utterly feelingless Dave believes certain things, and that it even believes them sincerely. Isn’t sincere belief a variety of feeling? Do the gears in a Ferris wheel sincerely believe anything? I would hope you would say no. Does the float-ball in a flush toilet sincerely believe anything? Once again, I would hope you would say no.

I feel sincerely in agreement, and would add only that it is not only a sincere or passionate belief that is felt, but also a phlegmatic, quotidian belief, such as that it’s Tuesday today.

And of course all those mechanical devices don’t feel.

And of course talk of Zombies that are like us on the outside and like the Ferris wheel on the inside is nonsense.

So suppose we backed off on the sincerity bit, and merely said that Universe Z’s Dave believes the falsities that it is uttering about its enjoyment of this and that. Well, once again, could it not be argued that belief is a kind of feeling? I’m not going to make the argument here, because that’s not my point. My point is that, like so many distinctions in this complex world of ours, the apparent distinction between phenomena that do involve feelings and phenomena that do not is anything but black and white.

I would and do argue the point that believing is feeling.

But I completely deny the point that the difference between feeling and non-feeling is a matter of degree! It’s all or none. 

The quality and intensity of the feeling may differ (the latter in degree), but whether there is feeling going on at all is not a matter of degree (though feeling may be flickering, intermittently on/off). In particular, there is nothing (except degrees of doing-power) in between a Ferris wheel, which feels nothing, and, say, an amphioxus which, even if all it can feel is “ouch,” is fully one of us sentients.

(I also think that near-threshold phenomenology and psychophysics — did I feel something or didn’t I? — is irrelevant to all this, but if one insists on citing it: Feeling is instantaneous. In the instant, you feel what you feel (if you are awake and sentient at all). If the source is a stimulus, it is irrelevant that you are uncertain near-threshold: you are not uncertain about what you felt. You felt whatever you felt. You are uncertain whether what you felt was the stimulation you were supposed to be detecting — whether it was external (a near-threshold “beep”) or endogenous: did I just feel the aura of an impending migraine?).

If I asked you to write down a list of terms that slide gradually from fully emotional and sentient to fully emotionless and unsentient, I think you could probably quite easily do so.

Not me. I could rank intensity, maybe even quality, by degrees, but not whether a feeling is felt! That’s an all-or-none divide, and on the other side of it is not an unfelt feeling, but nothing but unfelt doing (a Ferris wheel). Again, near-threshold judgments about a particular external or internal stimulus by a feeling person are irrelevant here. They are feeling; and we are just fussing over what they are feeling, not over whether they are feeling at all: that’s an all-or-none matter.

In fact, let’s give it a quick try right here. Here are a few verbs that come to my mind, listed roughly in descending order of emotionality and sentience: agonize, exult, suffer, enjoy, desire, listen, hear, taste, perceive, notice, consider, reason, argue, claim, believe, remember, forget, know, calculate, utter, register, react, bounce, turn, move, stop.

If I’m awake, doing every one of those things feels like something — agonizing as much as tasting or considering or knowing; only quality and intensity differ.

And of course that includes moving (if it is voluntary and I am not anesthetized).

I won’t claim that my extremely short list of verbs is impeccably ordered; I simply threw it together in an attempt to show that there is unquestionably a spectrum, a set of shades of gray, concerning words that do and that do not suggest the presence of feelings behind the scenes.

There are spectra of feeling quality and feeling quantity, but an all-or-none divide between feeling and nonfeeling. No continuum from me to the Ferris wheel (except doing). And that’s the [hard] problem: doings: easy; feelings: hard…

The tricky question then is: Which of these verbs (and comparable adjectives, adverbs, nouns, pronouns, etc.) would we be willing to apply to Dave’s zombie twin in Universe Z? Is there some precise cutoff line beyond which certain words are disallowed? Who would determine that cutoff line?

No tricks at all. If there could be a Zombie, it would have to be feeling nothing at all, just doing, not feeling. But supposing that an unfeeling ramified Ferris wheel could be doing what we are doing now — namely, discussing feeling, mutually intelligibly — is pure fantasy.

To put this in perspective, consider the criteria that we effortlessly apply (I first wrote “unconsciously”, but then I thought that that was a strange word choice, in these circumstances!) when we watch the antics of the humanoid robots R2-D2 and C-3PO in Star Wars.  When one of them acts fearful and tries to flee in what strike us as appropriate circumstances, are we not justified in applying the adjective “frightened”?  

I think most people’s intuitions about cinematic robots are incoherent. They do and don’t believe that they feel. Nothing hangs on such incoherent notions. Here’s the real test: If the robot were real, would they feel compunctions about kicking it? (I think they would, if the robot was sufficiently like us — just as they are with animals. Below, Doug seems to agree too.)

Here’s a piece — not much longer than this excerpt from Doug’s book — addressing this very issue. Punchline: you get out of a fictional robot whatever the author purports to put into it. If it is decreed, however incoherently, that the robot behaves just as if it feels, but doesn’t feel, then so be it. If it is decreed (as in the Spielberg movie) that it does feel, well then it does. Same for decrees that it flies, it can read minds, it can see into the future, it can change the past, it can redesign the universe, square circles, disprove Goedel’s theorem — in fiction, anything goes…

Harnad, S. (2001) Spielberg’s AI: Another Cuddly No-Brainer.  

Or would we need to have obtained some kind of word-usage permit in advance, granted only when the universe that forms the backdrop to the actions in question is a universe imbued with élan mental?  And how is this “scientific” fact about a universe to be determined?

No word-usage-permits for “feeling”: In fiction, go with the flow. In the real world, your mind-reading instincts (along with common sense and the invariant correlation of feeling with organism-like doings) will be your guide, whether you like it or not. (And, of course, you can’t be 100% sure in any case but your own.)

“Science” has nothing to do with it — except maybe if you’re wondering about someone in a coma…

And feeling itself is the élan mental — the trouble is, we don’t know how and why it happens (and, by my lights, we never will, because of limits on the power of causal explanation in any but a counterfactual psychokinetic universe, where feeling really is a causal “force” — but that’s not our universe).

If viewers of a space-adventure movie were “scientifically” informed at the movie’s start that the saga to follow takes place in a universe completely unlike ours — namely, in a universe without a drop of élan mental — would they then watch with utter indifference as some cute-looking robot, rather like R2-D2 or C-3PO (take your pick), got hacked into little tiny pieces by a larger robot?

Of course not: Fiction can dictate our premises, but not our conclusions…

Would parents tell their sobbing children, “Hush now, don’t you bawl!  That silly robot wasn’t alive!  The makers of the movie told us at the start that the universe where it lived doesn’t have creatures with feelings! Not one!”  What’s the difference between being alive and living?  And more importantly, what merits being sobbed over?

You’re asking moral questions, and you’re right to. It is only the existence of feeling that makes morality matter at all. And of course we alas have many psychopathic tendencies, not to mention sadistic ones. I don’t know if it’s parents or experiences or genes that cause some people to be indifferent to or even to enjoy pain in others, but it happens.

But none of this affective evocativeness changes the basic facts: Whether or not an entity feels is all-or-none.

And all mental states (including believing) are felt states: that’s what makes them “mental.” Otherwise they’d just be states, tout court, as in a ferris wheel or a float-ball in a flush toilet…

Functional Explanation is Causal Explanation (Reply to Antonio Chella & Riccardo Manzotti)

Antonio Chella & Riccardo Manzotti suggest that since we know that feeling exists, any explanation that cannot account for it is inadequate. They also suggest that there is a difference between functional explanation and causal explanation, illustrating the difference with examples from physics. Functional explanation may not explain feeling, but causal explanation may succeed, perhaps partly by scrapping the distinction between states that are internal and external to the brain:

CHELLA & MANZOTTI:since the fact that we feel is an empirical[ly] undeniable fact albeit from a first-person perspective, we should argue against any view that does not predict such possibility.

Except if no causal theory can explain feeling — in which case we are better off with one that can at least explain doing than with no explanation at all.

CHELLA & MANZOTTI:If feeling [does] not fit into the functional description of reality, so much the worse for functionalism.

So much the worse for any causal explanation. The Turing Robot is “merely” indistinguishable from us in performance capacity, but the Turing biorobot also has equivalent internal processes and states, even if synthetic ones. That’s still normal causal explanation, and remains so even if the biodynamics are natural rather than synthetic.

In other words, there is no wedge to be driven between “functional” explanation and “causal” explanation: All dynamical explanations of feeling are equally ineffectual, for the same reasons: There is neither any causal room for feeling, nor any causal need for it.

CHELLA & MANZOTTI:we purposefully shifted from a causal description to a functional one

But unfortunately it is a distinction that marks nothing substantive, and does not solve the “hard” problem of explaining how and why we feel.

CHELLA & MANZOTTI:the equations for gravity and electromagnetism have the same form… The two cases are functionally identical. Yet, they are different both in causal and in physical terms since the physical properties (or powers) which are responsible for the two situations are very different (on one hand, mass and gravity and, on the other hand, electric charge and electromagnetic force)

The equations are equivalent at one level of description, but they are not a complete description. Both mass and charge are measurable, describable, predictable physical properties — unlike feelings, which certainly exist, but do not otherwise enter into the causal matrix.

CHELLA & MANZOTTI:What is still missing is a theory outlining a conceptual and causal connection between neural activity and phenomenal experience and functionalism does not seem to possess the resources to do it.

Nor does any other causal theory.

CHELLA & MANZOTTI:[In] Harnad’s… conception… internal and external… refer to physical events internal or external to the brain as if the brain boundaries were some kind of relevant threshold…

Yes, mental states (feelings) — for which I recommend a migraine headache as a paradigmatic example — occur in the head, not outside it. Both doings and their functional substrate can be distributed beyond the bounds of a head, but feelings (until further notice) cannot…

For a critique of the notion of the “extended mind,” see:

Dror, I. and Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In: Dror, I. and Harnad, S. (Eds.) Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins.

CHELLA & MANZOTTI:assuming that the mind is indeed internal to anything may be a misleading

It is misleading to mix up “in the head” with “in the mind.” But “mind” is a weasel word. To have a mind is to feel. And there is no reason to believe that a headache can be wider than a head…

Talking About Feeling: Summary of Forum

In my little essay I tried to redraft the problem of consciousness — the “mind/body problem” — as the problem of explaining how and why we feel rather than just do.

It was not meant as a terminological exercise. The usual way we talk about consciousness and mental states uses weasel-words (“conscious,” “mental,” “experience”) that are systematically ambiguous about whether we are just talking about access to data (an easy problem, already solved in principle by computation, which is simply an instance of doing) or about felt access to data (the hard part being to explain not just the doing but the feeling).

Nor was it meant as a metaphysical exercise: The problem is not one of “existence” (feeling indubitably exists) but of explanation: How? Why?

The commentaries were a fair sample, though a small one, of the issues and the kinds of views thinkers have on them today. A much fuller inventory will be presented at the 2012 Summer School on the Evolution and Function of Consciousness in Montreal June/July of next year. Think of this small series of exchanges in the On the Human Forum as an overture to that fuller opus.

I have already responded in detail individually to each of the 10 commentators (15 commentaries) so I will just summarize the gist here:

Judith Economos rightly insists, as the only one with privileged access to what’s going on in her mind, that it is not true that she feels everything of which she is conscious: Some of it — the part that is not sensory or emotional — she simply knows, though it doesn’t feel like anything to know it. I reply (predictably) that “know,” too, is a weasel-word, ambiguous as between felt and unfelt access to data. So if one is awake (conscious) whilst one is knowing, one is presumably feeling something. One is also, presumably, feeling something whilst one is not-knowing something, or knowing something else. If all three of those states feel identical, how does one know the difference? For if “knowing” just refers to having data, then it is just a matter of know-how (doing), which is already explained (potentially) by computation, and has nothing to do with consciousness.

Galen Strawson seems to agree with me on the distinction, but prefers “experience” (“with qualitative character”) to “feeling.” Fine — but “experience” alone is ambiguous; and trailing the phrase “with qualitative character” after it seems a bit burdensome to convey what “feel” does in one natural, intuitive, monosyllabic swoop. The substantive disagreement with Galen is about the coherence and explanatory value of “panpsychism” (i.e., the metaphysical hypothesis that feeling, or the potential to feel, is a latent and ubiquitous property of the entire universe) as a solution to the hard problem. The existence of feeling is not in doubt. But calling it a fundamental take-it-or-leave-it basic property of the universe does not explain it; it’s just a metaphysical excuse for the absence of an explanation!

Shimon Edelman is more optimistic about an explanation because there are computational and dynamic ways to “mirror” every discriminable difference (JND) in a system’s input in differences in its internal representations. This would certainly account for every JND a system can discriminate; but discrimination is doing: The question of how and why the doing is felt is left untouched.
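For concreteness, here is a minimal sketch (in Python; the JND value, names and stimulus values are all hypothetical, invented for the example) of what such “mirroring” might look like: discriminably different inputs get different internal codes. Note that everything in it is detection, discrimination and representation — doing — with feeling nowhere in the loop.

```python
JND = 0.5  # hypothetical just-noticeable difference on some input dimension

def internal_representation(stimulus: float) -> int:
    # Quantize the input into (roughlyly) JND-sized bins: a crude stand-in for
    # an internal representation that mirrors discriminable differences.
    return round(stimulus / JND)

a, b, c = 1.0, 1.2, 1.6  # invented stimulus magnitudes
print(internal_representation(a) == internal_representation(b))  # True: same bin
print(internal_representation(a) == internal_representation(c))  # False: distinct codes
```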

David Rosenthal interprets the experimental evidence for “unconscious perception” as evidence for “unconscious feeling,” but, to me, that would be the same thing as “unfelt feeling”, which makes no sense. So if it’s not feeling, what is unconscious “perception”? It is unconscious detection and discrimination — in other words, internal data-doings and dispositions that are unproblematic because they are unfelt (the easy problem). If all of our know-how were like that, we’d all be Zombies and there would be no hard problem. David needs unconscious perception to be able to move on to higher-order consciousness (but that is, of course, merely higher-order access — the easy part, until/unless feeling itself is first explained). So this seems like recourse to either a bootstrap or a skyhook.

John Campbell points out that sensorimotor grounding is not enough to explain meaning unless the sensing is felt, and I agree. But he does not explain how or why sensorimotor grounding is felt.

Anil Seth reminds us that many had thought that there was a “hard problem” with explaining life, too, and that that turned out to be wrong. So there’s no reason not to expect that feeling will eventually be explained too. The trouble is that apart from the observable properties of living things (“doings”) there was never anything else that vitalists could ever point to, to justify their hunch that life was inexplicable unless one posited an “élan vital.” Modern molecular biology has since shown that all the observable properties of life could be explained, without remainder, after all. But in the case of feeling there is a property to point to — observable only to the feeler, but as sure as anything can be — that the full explanation of the observable doings leaves out and hence cannot account for. (Perhaps feeling is the property that the vitalists had in mind all along.)

The remaining commentaries seem to be based on misunderstandings:

Bernard Baars took “Turing Robot” to refer to “Turing Machine.” It does not. A Turing Machine is just a formalization of computation. The internal mechanism of a Turing Robot can be computational or dynamical (i.e., any physical process at all, including neurobiological).

Krisztian Gabris thinks feelings are needed to “motivate” us to do what needs to be done. That’s certainly what it feels like to us. But on the face of it, the only thing that’s needed is a disposition to do what needs to be done. That’s just know-how and doing, already evident in toy robots and toasters. How and why it (sometimes) feels like something to have a disposition to do something remains unexplained.

Joel Marks assumed that the Turing Robot would be an unfeeling Zombie. This is not necessarily true. (I think it would feel — it’s just that we won’t be able to know whether it feels; and even if it does feel, we will be unable to explain how or why.) Hence Joel’s question about whether it would be wrong to create a robot that feared death is equivocal: By definition, if it’s a Zombie, it cannot fear, it can only act as if it feared. (Witnessing that may make us feel bad, but the Zombie — if there can be Zombies — would feel nothing at all.) And if the Turing Robot feels, it’s as important to protect it from hurt as it is to protect any other feeling creature from hurt.

A Fifth Force: But An Acausal One… (Reply to Galen Strawson-2)

Galen Strawson does a brilliant, heroic job with panpsychism:

The only thing we know for sure — indeed, with a Cartesian certainty that is as apodictic as the logical necessity of mathematics — is that and what we feel.

Everything else we know (or believe we know), we likewise know “through” feeling — in that it feels like something to learn it and it feels like something to know it.

(It feels like something to make an “empirical” observation. It feels like something to understand that something is the case. It feels like something to understand an inference or a causal explanation.)

So feeling is certain, whereas physics (“doing,” in my parlance) is not certain.

But we are realists, trying to do the best we can to explain reality — not extreme sceptics, doubting everything that is not absolutely certain, even if it’s highly probable.

We are just looking for truth, not necessarily certainty.

“Experience” is a weasel-word because it can mean either feeling something — which is highly problematic (the “hard problem”) — or it can just mean acquiring empirical data (as in: “this machine had the solution built in, that machine learned it from experience”) — which is unproblematic (doing, the “easy” problem).

So whereas it is true that the only thing we know for sure (besides the things that are necessarily true on pain of contradiction) is that feeling exists, neither everyday life nor science requires certainty. High probability on the evidence (data) will do.

And although it is true that all evidence is felt evidence, it is only the fact that it is felt that is certain. The evidence itself (doing) is only probable.

In other words, although they always accompany the data-acquisition (doing), the feelings are fallible. We feel things that are both true and untrue about the world, and the only way to test them out is via doings. It is true that the data from those doings are also felt. But the felt data are answerable to the doings, and not to the fact that they are felt.

And not only are our feelings fallible, as regards the truth: they also seem to be causally superfluous. Doings (including data-acquisition) alone are enough, for evolution, as well as for learning. Some doings are undeniably felt, but the question is: how and why?

When we are doing physics (or chemistry, or biology, or engineering) and causal explanation (rather than metaphysics), we have to explain the facts, amongst which one fact — the fact that we feel — seems pretty refractory to any sort of explanation except if we suppose that feeling is simply a basic property of the universe (whether local to the organisms in the earth’s biosphere [Galen’s “micropsychism”] or somehow smeared all over the universe [“panpsychism”].)

There’s no doubt that feeling exists, so in that sense feeling is indeed a property of the universe. But with all other properties — doings, all — we have become accustomed to being able (in practice, or at least in principle) to give a causal explanation of them in terms of the four fundamental forces (electromagnetism, gravitation, strong subatomic, weak subatomic). Those forces themselves we accept as given: properties of the universe such as it is, for which no further explanation is possible.

Galen’s metaphysics would require adding something like a fifth member to this fundamental quartet — feeling — with the difference that, unlike the others, it is not an independent force, it does not itself cause and thereby explain doings causally, but rather is merely correlated with them, inexplicably, for some doings.

And our justification for adding a fifth acausal force? The fact that it is inexplicably (but truly) correlated with some doings (all doings that we feel). If feeling had truly been a 5th force (causal rather than acausal), namely, “psychokinesis” (“mind over matter”), then that would indeed have merited elevating it to fundamental status, exempt from further explanation along with the other four.

But there is not a shred of evidence for psychokinesis as a causal force (and all attempts to measure psychokinesis have failed, because the other four forces already covered all the causal territory — doing — with no remainder and no further room for causal intervention).

So all we have, inexplicably, is the fact that we feel. I don’t think that that fact warrants any further metaphysics than that: feeling definitely exists — and, unlike anything else, exists with certainty rather than just probably. It also happens to feel like something to find out and understand anything we know. The rest is an epistemic problem: why and how does getting or having data feel like something (for feeling creatures like us)?

Neither “micropsychism” nor “panpsychism” answers this question. They just take it for granted that it is so.

Home Truths About Doing, Feeling, Explaining and Robots (Reply to Shikha Singh)

Doings are observable by anyone (via senses or senses plus measuring instruments).

Feelings are observable only to their feeler.

The only feelings a feeler can feel are his own.

That other people and animals feel is a safe guess, because they are related to and resemble us.

That today’s man-made robots feel is as unlikely as that a toaster or stone feels.

That a robot whose doings are Turing indistinguishable from the rest of us for a lifetime would feel would be almost as safe a guess as that other people and animals feel. (Perhaps a biorobot would be an even safer guess).

A robot is just an autonomous causal system that can do some things that people and animals can do.

Cognitive science is about discovering the causal mechanism that generates our capacity to do what we can do. (We can think of it as discovering what kind of robots we are.)

No one but the Turing robot can know whether its causal mechanism does generate feeling.

And even if it does, not even the Turing robot can explain or know how or why.

Why a disposition to feel and then to do — rather than just a direct disposition to do?

(Reply to Krisztian Gabris)

KG:Take the pain example… what would happen if for some reason… a decision is made which goes against the evolutionarily ingrained rules of the system. For example, a hand is left in the fire… What would be the punishment of such behavior in a Turing robot (other than tissue damage)? Nothing, the robot would go on it’s own business with signals and internal warnings, but it would not feel the pain. Whereas a human would… feel pain, and would take away the hand… not only because of [genetic] programming, but because of… feeling pain.

Yours is the natural intuitive explanation for why we feel — the one that feels right. “Why,” after all, is a causal question: Why do we pull our hand out of the fire? Yes, fire causes tissue damage, but that’s not what makes us withdraw our hand (unless we are anaesthetized): It’s because it hurts!

So surely that’s what pain’s for: To signal tissue damage by causing pain to be felt.

Why? So you’ll withdraw your hand. Because if your ancestors had been indifferent to tissue damage, they would not have had surviving descendants.

So you withdraw your hand because it hurts. And it hurts in order to cause you to feel like withdrawing your hand — and therefore you withdraw your hand.

Injury → pain → withdraw hand.

And the reason the feeling of pain evolved is because those whose ancestors felt pain were more likely to feel like withdrawing their hands than those who did not.

But let us note that what was needed, for survival, was to withdraw the injured hand — an act, not a sentiment. The pain was a means, not an end. It’s an extra step; and, as I will try to illustrate with other examples, a superfluous extra step, practically speaking. So the hard problem is to explain how and why this extra, apparently superfluous step evolved at all.

Suppose that what you had chosen for your evolutionary example of the adaptive trait for “motivational” scrutiny had been — rather than the withdrawing of the injured hand — the growing of wings, or the beating of the heart or the dilating of the pupil of the eye.

You’ll perhaps find it strange to ask about feeling the “motivation” to grow wings (though it’s a reasonable question), because growing is not something we ordinarily think of ourselves as “doing.” But note that the very same question you asked about the evolution of pain — and the “punishment” for non-withdrawal of the injured hand if no one feels the “motivation” to withdraw it — applies to the non-growth of wings. And the answer is the same:

If we are talking about evolution — which means traits that increase the likelihood of survival and reproduction — then for both the disposition to grow wings and the disposition to withdraw the hand from injury the “reward” is increased likelihood of survival and reproduction; and for both the lack of the disposition to grow wings and the lack of the disposition to withdraw the hand from injury the “punishment” is decreased likelihood of survival and reproduction.

The very same evolutionary reward/punishment scenario also applies to the disposition of our hearts to beat, which is even more obviously something that our bodies do — or, if you want an example of something we do in response to a circumstantial stimulus rather than constantly, there’s pupillary dilation to light intensity.

Or, if you want something we do voluntarily rather than involuntarily — although that’s begging the question, because it is really the involuntary/voluntary distinction that poses the “hard” problem and calls for explanation — consider the implicit improvement in skills that occurs without any sense of having done anything deliberately (sometimes even without the feeling that we have improved) in implicit learning, or the changes in our dispositions caused by subtle Pavlovian conditioning or Skinnerian reinforcement when we don’t even feel that our dispositions are changing, or the voluntary take-over of breathing — usually involuntary, like the heart-beat.

And a disposition is a disposition to do, whether it’s to grow, to beat, to dilate, to withdraw, to salivate, to smile or to breathe. So the question remains: Why the extra intermediate step of feeling, when the reward and punishment come from the disposition to do?

The very same reasoning applies to learning itself: We learn to do things — such as what to eat and what to avoid — by trial and error and reward/punishment. The consequences of doing the right thing feel good and the consequences of doing the wrong thing feel bad, so we learn to do the right thing. “Motivation” again. But again, it is the disposition to do the right thing that matters; the feeling of reward and punishment is an extra. Why? Both in evolution and in learning there are consequences (enhanced survival and reproduction in the case of evolution, and enhanced functioning and performance in the case of learning: eating nourishing things gives us energy, eating toxic things makes us sick) and the consequences are sufficient to guide our dispositions to do. But why is any of that felt rather than just done?

These questions are hard not only because of the underlying problem of causality, but because our intuitions keep telling us that it’s obvious that we need to feel. Yet the causal role of feeling is anything but obvious, if looked at objectively, which means functionally.

You assumed that a Turing robot would not feel. That’s not at all sure. But let’s consider today’s rudimentary robots, which are as unlikely to feel as a toaster or a stone. Yet even they can already be designed to withdraw damaged limbs, or to learn to withdraw damaged limbs. They need sensors, of course, but it’s not at all clear why they would need feelings (even if we had the slightest clue of how to design feelings!), if the objective is to do — or to learn to do — what needs to be done in order to survive and function. They need to detect tissue damage, and then they need to be disposed to do — or disposed to learn to do — whatever needs to be done.
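A minimal sketch, for concreteness, of such unfelt doing (all names and thresholds are hypothetical, invented for the example): a damage detector coupled to a disposition to withdraw, with nothing felt at any point in the loop.

```python
import random

def damage_detected(limb_temperature: float) -> bool:
    # A mere transducer reading crossing a threshold: detection, not feeling.
    return limb_temperature > 60.0  # hypothetical damage threshold (deg C)

def withdraw(limb: str) -> None:
    # The disposition to do: retract the limb when damage is detected.
    print(f"withdrawing {limb}")

# The whole "survival-relevant" loop: sense -> disposition -> act.
while True:
    temperature = random.uniform(20.0, 100.0)  # stand-in for sensor input
    if damage_detected(temperature):
        withdraw("hand")
        break  # the adaptive act has occurred, with no feeling anywhere
```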

If (sensible) anti-Creationism impels us to reject arguments from robotic design, consider that evolution can be simulated computationally in artificial-life simulations, and the kinds of traits we build into our robots can therein be shown to evolve by random variation and selection; the same can be done for computer models of learning (which just involve a change in simulation time scale), including computer models of the evolution of the disposition to learn (e.g., Baldwinian evolution).
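Here, again only as an illustrative sketch (the fitness rule, population size and every parameter are invented for the example), is the kind of artificial-life simulation meant: a heritable withdrawal threshold evolving by nothing but random variation and selection, with feeling playing no role anywhere.

```python
import random

POP, GENERATIONS, DAMAGE_TEMP = 50, 40, 60.0  # invented parameters

def fitness(threshold: float) -> float:
    # Withdrawing above the damage point is heavily penalized (injury);
    # withdrawing far below it carries a mild cost (wasted withdrawals).
    if threshold > DAMAGE_TEMP:
        return -10.0 * (threshold - DAMAGE_TEMP)
    return -(DAMAGE_TEMP - threshold)

# Random initial dispositions; then variation plus selection, nothing else.
population = [random.uniform(0.0, 120.0) for _ in range(POP)]
for _ in range(GENERATIONS):
    survivors = sorted(population, key=fitness, reverse=True)[: POP // 2]
    offspring = [t + random.gauss(0.0, 2.0) for t in survivors]  # mutation
    population = survivors + offspring

print(f"mean evolved threshold: {sum(population) / POP:.1f}"
      f" (damage occurs at {DAMAGE_TEMP})")
```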

And lest we propose the superior power of cognition over Pavlovian and Skinnerian learning, remember that the kind of information processing underlying cognition can be implemented (along with its power and benefits) computationally, in unfeeling machines.

So there is definitely a problem here, of explaining the ostensibly superfluous causal role of feeling in doing. And not only do our intuitions fail us, but so does every objective attempt at the kind of causal explanation that serves us so well in just about every other functional dynamic under the sun.

To be continued in the 2012 Summer School on the Evolution and Function of Consciousness

A Turing Robot Is Not a Turing Machine (Reply to Bernard Baars)

I don’t think anyone on any side of this discussion has said that the brain is a Turing Machine. The one who comes closest, Shimon Edelman, explicitly says “I argue that feelings in fact are computations, albeit not Turing computations.”

A Turing robot (i.e., a robot capable of passing the Turing Test, indistinguishably from any of the rest of us, for a lifetime) is not a computer (Turing machine). It is a dynamical system, with sensors and effectors, and on the inside it may be implementing any processes — whether dynamic or computational — that give it the capacity to pass the Turing Test, Turing computation being only one among the many possible processes.

The “weak” version of the Church-Turing Thesis is that everything that is “effectively computable” for a mathematician is computable by a Turing Machine.

The strong version of the Church-Turing Thesis is that Turing computation (digital computation) can simulate and approximate (just about) any dynamical physical process in the universe, including sensors and effectors, as well as analog continuous, parallel, distributed processes (such as internal rotation), and indeed also just about any neuro-chemical brain processes (perhaps excluding quantum and chaotic processes). But that simulation is only formal. A purely computational airplane does not fly. And a purely computational brain does not cognize (nor, a fortiori, does it feel). Nor does a purely computational robot (a “virtual robot”).

It is an empirical question, however, what and how much of the actual internal functioning of a Turing robot (or brain) could be performed by Turing computation.

What’s sure is that it cannot be all of it.

BB:I realize that traditionally Turing Machines are taken to be abstract versions of all possible computational implementations, including bio computation. If you can therefore prove, or quasi-prove, that something is possible or impossible for a Turing Machine that is taken to apply to all possible computers. The trouble is that the assumption is wrong.

The strong version of the Church-Turing Thesis holds that Turing computation can simulate and approximate (just about) any dynamical physical process — not that it can stand in for any dynamical physical process. You can’t fly to Chicago on a simulated airplane; flying is not computation. But computation can decompose and test the causal explanation of flying (or cognition).
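To make the simulation/implementation distinction concrete, here is a toy sketch (with hypothetical initial conditions) that numerically integrates a flight trajectory: the computation is systematically interpretable as flying, but nothing flies.

```python
DT, G = 0.01, 9.81                    # time step (s), gravity (m/s^2)
x, y, vx, vy = 0.0, 0.0, 30.0, 40.0   # invented initial position and velocity
t = 0.0

while y >= 0.0:                       # integrate until the trajectory "lands"
    x, y = x + vx * DT, y + vy * DT   # these are just numbers changing,
    vy -= G * DT                      # not anything moving through the air
    t += DT

print(f"simulated flight: {x:.0f} m downrange in {t:.1f} s")
```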

BB:1. Turing Machines have no memory, and no time, and no string limits. Those are non-biological assumptions.

Turing machines are formal abstractions, but they can be implemented in real finite-state dynamical systems, for example, digital computers (which do have memories, clocks and length limits).
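For concreteness, a minimal sketch of such an implementation (the machine and its rule table are hypothetical, invented for the example): a formal Turing-machine rule table executed on a real, finite computer, with an explicit step budget standing in for finite time and a tape that is grown only as needed.

```python
def run_tm(tape: list, rules: dict, state: str = "scan") -> list:
    # Execute a rule table of the form (state, read) -> (next, write, move)
    # on a finite tape, with a finite step budget standing in for real time.
    head, steps = 0, 0
    while state != "halt" and steps < 10_000:
        if head == len(tape):
            tape.append("_")          # grow the (finite) tape only as needed
        symbol = tape[head]
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
        steps += 1
    return tape

# A hypothetical machine: a unary incrementer (scan right over 1s, append a 1).
rules = {
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("halt", "1", "R"),
}
print(run_tm(list("111"), rules))     # ['1', '1', '1', '1']: 3 + 1 = 4
```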

BB:2. Turing Machines are rigidly serial, when the brain is a massively parallel, and parallel-interactive organ.

Yes, but as noted, nobody says the brain is a Turing machine, just that the brain can be simulated computationally by a Turing machine.

BB:3. While it is argued that TM’s can simulate parallel and parallel-interactive computations, that is plausible only because TM’s totally ignore memory, time, and finite string limits.

They can simulate them because the parallelism is simulated serially, in virtual rather than real time.
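A minimal sketch of the trick (all names and the update rule are hypothetical): every “unit” is updated from a frozen snapshot of the previous virtual time-step, so the serially computed updates behave exactly as if they had all occurred at once.

```python
def parallel_step(units: list) -> list:
    snapshot = list(units)            # freeze virtual time t for every unit
    return [                          # compute each unit's state at t + 1
        (snapshot[i - 1] + snapshot[(i + 1) % len(snapshot)]) % 2
        for i in range(len(snapshot)) # serially, but only from the snapshot
    ]

units = [0, 1, 0, 0, 1, 1, 0, 1]      # a hypothetical ring of binary units
for virtual_time in range(1, 4):
    units = parallel_step(units)
    print(f"virtual t = {virtual_time}: {units}")
```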

BB:4. I believe that Stan Franklin and a colleague have given a formal proof that contrary to earlier claims, there are formal machines that are more powerful mathematically than Turing Machines. This vitiates the whole standard use of TMs.

The subject of hypercomputation is controversial and I think the “hard” problem of explaining feeling is hard enough without complicating it with speculations about hypercomputation (or quantum mechanics!).

The weak Church-Turing Thesis stands unrefuted to date: Whatever mathematicians have regarded as computation has turned out to be Turing machine-computable.

The strong Church-Turing Thesis does not hold that everything is computer-simulable, only just-about everything.

BB:5. Consciousness and qualia are biological entities, which are selectionist rather than instructionist in principle (GM Edelman), and reflect a huge evolutionary history — 200 million for mammals alone.

No doubt. But feeling (i.e., consciousness, qualia) poses a special, hard problem, both for evolutionary explanation and for functional/causal explanation. This problem will be the subject of the 2012 Summer School on the Evolution and Function of Consciousness at the Université du Québec à Montreal in June/July 2012, in which many of the contributors to this discussion (including Bernie Baars) and many other thinkers will be participating. (The Summer School will also be in commemoration of the centennial of Turing’s birth in June 1912.)

BB:6. We have a long and repeated history of “impossibility proofs” designed to falsify important empirical advances. Newton’s action at a distance, the molecular basis of life, etc. These efforts routinely fail, though they sometimes do so in interesting ways.

Explaining how and why we feel is hard (indeed, I think, impossible), but the reason has nothing to do with Turing machines or computation, nor with either the weak or the strong Church-Turing Thesis. (See “Vitalism, Animism and Feeling (Reply to Anil Seth)” in this discussion.)

BB:7. There is no substitute for looking at nature.

Logic is an ineluctable part of nature too…

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer