{"id":576,"date":"2018-12-31T21:34:36","date_gmt":"2018-12-31T21:34:36","guid":{"rendered":"http:\/\/generic.wordpress.soton.ac.uk\/skywritings\/?p=576"},"modified":"2018-12-31T21:34:36","modified_gmt":"2018-12-31T21:34:36","slug":"why-a-disposition-to-feel-and-then-to-do-rather-than-just-a-direct-disposition-to-do","status":"publish","type":"post","link":"http:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2018\/12\/31\/why-a-disposition-to-feel-and-then-to-do-rather-than-just-a-direct-disposition-to-do\/","title":{"rendered":"Why a disposition to feel and then to do — rather than just a direct disposition to do?"},"content":{"rendered":"

(Reply to Krisztian Gabris)

KG: “Take the pain example… what would happen if for some reason… a decision is made which goes against the evolutionarily ingrained rules of the system. For example, a hand is left in the fire… What would be the punishment of such behavior in a Turing robot (other than tissue damage)? Nothing, the robot would go on its own business with signals and internal warnings, but it would not feel the pain. Whereas a human would… feel pain, and would take away the hand… not only because of [genetic] programming, but because of… feeling pain.”

Yours is the natural intuitive explanation for why we feel — the one that feels right. “Why,” after all, is a causal question: Why do we pull our hand out of the fire? Yes, fire causes tissue damage, but that’s not what makes us withdraw our hand (unless we are anaesthetized): It’s because it hurts!

So surely that’s what pain’s for: To signal tissue damage by causing pain to be felt.

Why? So you’ll withdraw your hand. Because if your ancestors had been indifferent to tissue damage, they would not have had surviving descendants.

So you withdraw your hand because it hurts. And it hurts in order to cause you to feel like withdrawing your hand — and therefore you withdraw your hand.

Injury → pain → withdraw hand.

And the reason the feeling of pain evolved is that those whose ancestors felt pain were more likely to feel like withdrawing their hands than those who did not.

But let us note that what was needed, for survival, was to withdraw the injured hand — an act, not a sentiment. The pain was a means, not an end. It’s an extra step; and, as I will try to illustrate with other examples, a superfluous extra step, practically speaking. So the hard problem is to explain how and why this extra, apparently superfluous step evolved at all.

Suppose that what you had chosen for your evolutionary example of the adaptive trait for “motivational” scrutiny had been — rather than the withdrawing of the injured hand — the growing of wings, or the beating of the heart, or the dilating of the pupil of the eye.

You’ll perhaps find it strange to ask about feeling the “motivation” to grow wings (though it’s a reasonable question), because growing is not something we ordinarily think of ourselves as “doing.” But note that the very same question you asked about the evolution of pain — and the “punishment” for non-withdrawal of the injured hand if no one feels the “motivation” to withdraw it — applies to the non-growth of wings. And the answer is the same:

If we are talking about evolution — which means traits that increase the likelihood of survival and reproduction — then for both the disposition to grow wings and the disposition to withdraw the hand from injury the “reward” is increased likelihood of survival and reproduction; and for both the lack of the disposition to grow wings and the lack of the disposition to withdraw the hand from injury the “punishment” is decreased likelihood of survival and reproduction.

The very same evolutionary reward/punishment scenario also applies to the disposition of our hearts to beat, which is even more obviously something that our bodies do — or, if you want an example of something we do in response to a circumstantial stimulus rather than constantly, there’s pupillary dilation in response to light intensity.

Or, if you want something we do voluntarily rather than involuntarily — although that’s begging the question, because it is really the involuntary/voluntary distinction that poses the “hard” problem and calls for explanation — consider the implicit improvement in skills that occurs in implicit learning, without any sense of having done anything deliberately (sometimes even without the feeling that we have improved); or the changes in our dispositions caused by subtle Pavlovian conditioning or Skinnerian reinforcement, when we don’t even feel that our dispositions are changing; or the voluntary take-over of breathing, which is usually involuntary, like the heart-beat.

And a disposition is a disposition to do, whether it’s to grow, to beat, to dilate, to withdraw, to salivate, to smile or to breathe. So the question remains: Why the extra intermediate step of feeling, when the reward and punishment come from the disposition to do?

The very same reasoning applies to learning itself: We learn to do things — such as what to eat and what to avoid — by trial and error and reward/punishment. The consequences of doing the right thing feel good and the consequences of doing the wrong thing feel bad, so we learn to do the right thing. “Motivation” again. But again, it is the disposition to do the right thing that matters; the feeling of reward and punishment is an extra. Why? Both in evolution and in learning there are consequences (enhanced survival and reproduction in the case of evolution, and enhanced functioning and performance in the case of learning: eating nourishing things gives us energy, eating toxic things makes us sick) and the consequences are sufficient to guide our dispositions to do. But why is any of that felt rather than just done?
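
To make the point concrete: here is a minimal sketch, in Python, of trial-and-error learning guided by consequences alone. The foods, rewards and learning rate are illustrative assumptions, not anyone’s actual model; the point is that nothing in the loop requires that anything be felt.

```python
import random

# A toy learner: its "disposition" is just a table of action values,
# updated only by the consequences of acting. Foods, rewards, and the
# learning rate below are illustrative assumptions.

ACTIONS = ["eat_berry", "eat_toadstool"]
REWARD = {"eat_berry": 1.0, "eat_toadstool": -1.0}  # nourishing vs. toxic

disposition = {a: 0.0 for a in ACTIONS}  # initial indifference
alpha = 0.1  # learning rate

def choose(eps=0.1):
    # Mostly act on the current disposition; occasionally explore.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=disposition.get)

for trial in range(500):
    action = choose()
    consequence = REWARD[action]
    # Nudge the disposition toward the observed consequence.
    disposition[action] += alpha * (consequence - disposition[action])

print(disposition)  # ends up disposed to eat berries, not toadstools
```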

These questions are hard not only because of the underlying problem of causality, but because our intuitions keep telling us that it’s obvious that we need to feel. Yet the causal role of feeling is anything but obvious, if looked at objectively, which means functionally.

You assumed that a Turing robot would not feel. That’s not at all sure. But let’s consider today’s rudimentary robots, which are as unlikely to feel as a toaster or a stone. Yet even they can already be designed to withdraw damaged limbs, or to learn to withdraw damaged limbs. They need sensors, of course, but it’s not at all clear why they would need feelings (even if we had the slightest clue of how to design feelings!), if the objective is to do — or to learn to do — what needs to be done in order to survive and function. They need to detect tissue damage, and then they need to be disposed to do — or disposed to learn to do — whatever needs to be done.
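
For concreteness, here is a minimal sketch of such a reflex loop, with a simulated sensor and actuator standing in for real hardware. The names and the threshold are illustrative assumptions, not a real robot API; the point is that the loop goes straight from detection to disposition-to-do, with no step that corresponds to feeling anything.

```python
import random

# Simulated damage sensor and actuator stand in for real hardware; the
# names and threshold are illustrative assumptions, not a real robot API.

DAMAGE_THRESHOLD = 0.7  # assumed calibration value

def read_damage_sensor(limb):
    """Stand-in for a hardware sensor: a damage signal in [0, 1]."""
    return random.random()

def retract_limb(limb):
    """Stand-in for an actuator command."""
    print(f"withdrawing {limb}")

def reflex_step(limbs):
    for limb in limbs:
        if read_damage_sensor(limb) > DAMAGE_THRESHOLD:
            retract_limb(limb)  # detect, then do: nothing is felt

for _ in range(10):  # a few control-loop ticks
    reflex_step(["left_hand", "right_hand"])
```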

If (sensible) anti-Creationism impels us to reject arguments from robotic design, consider that evolution can be simulated computationally in artificial-life simulations, and the kinds of traits we build into our robots can therein be shown to evolve by random variation and selection. The same can be done for computer models of learning (which just involve a change in simulation time scale), including computer models of the evolution of the disposition to learn (e.g., Baldwinian evolution).
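
Here is a minimal artificial-life sketch of that kind of simulation, under simplifying assumptions: each agent carries a single heritable trait, its probability of withdrawing from injury, and the population size, survival odds and mutation scale are all illustrative. The “reward” and “punishment” are nothing but differential survival and reproduction; at no point does anything in the loop feel anything.

```python
import random

# Each agent carries one heritable trait: its probability of withdrawing
# from injury. Non-withdrawers are likelier to die before reproducing;
# random variation plus selection then raises the trait in the population.
# Population size, survival odds, and mutation scale are illustrative.

POP, GENERATIONS, MUTATION = 200, 100, 0.05

population = [random.random() for _ in range(POP)]  # withdrawal dispositions

for _ in range(GENERATIONS):
    survivors = []
    for trait in population:
        withdraws = random.random() < trait
        if random.random() < (0.9 if withdraws else 0.3):  # survival odds
            survivors.append(trait)
    # Survivors reproduce, with mutation, back up to population size.
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, MUTATION)))
        for _ in range(POP)
    ]

print(sum(population) / POP)  # mean disposition climbs toward 1.0
```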

And lest we propose the superior power of cognition over Pavlovian and Skinnerian learning, remember that the kind of information processing underlying cognition can be implemented (along with its power and benefits) computationally, in unfeeling machines.

So there is definitely a problem here, of explaining the ostensibly superfluous causal role of feeling in doing. And not only do our intuitions fail us, but so does every objective attempt at the kind of causal explanation that serves us so well in just about every other functional dynamic under the sun.

To be continued in the 2012 Summer School on the Evolution and Function of Consciousness…
