Feeling vs. Moving

Sentience — which means the capacity to feel *something* (anything) — can differ in quality (seeing red feels different from hearing a cricket), in intensity (getting kicked hard feels worse than getting lightly tapped), or in duration (now you feel, now you don’t).

But whether an organism has or lacks the capacity to feel anything at all, be it ever so faint or brief, is all-or-none, not a matter of degree along some sort of “continuum.”

Mammals, birds, reptiles, fish, and probably most or all invertebrates can feel (something, sometimes) — but not rhododendrons or Rhizobium radiobacter or Rutstroemia firma… or any of today’s robots.

There is no more absolute difference than that between a sentient entity and an insentient one, even if both are living organisms.

(Sedatives can dim feeling, general anesthesia can temporarily turn it off, and death or brain-death can turn it off permanently, but the capacity or incapacity to feel anything at all, ever, is all-or-none.)


Dale Jamieson on sentience and “agency”

Dale Jamieson’s heart is clearly in the right place, both about protecting sentient organisms and about protecting their insentient environment.

Philosophers call deserving such protection “meriting moral consideration” (by humans, of course).

Dale points out that humans have followed a long circuitous path — from thinking that only humans, with language and intelligence, merit moral consideration, to thinking that all organisms that are sentient (hence can suffer) merit moral consideration.

But he thinks sentience is not a good enough criterion. “Agency” is required too. What is agency? It is being able to do something deliberately, and not just because you were pushed.

But what does it mean to be able to do something deliberately? I think it’s being able to do something because you feel like it rather than because you were pushed (or at least because you feel like you’re doing it because you feel like it). In other words, I think a necessary condition for agency is sentience. 

Thermostats and robots and microbes and plants can be interpreted by humans as “agents,” but whether humans are right in their interpretations depends on facts – facts that, because of the “other-minds problem,” humans can never know for sure: the only one who can know for sure whether a thing feels is the thing itself. 

(Would an insentient entity, one that was only capable of certain autonomous actions — such as running away or defending itself if attacked, but never feeling a thing — merit moral consideration? To me, with the animal kill counter registering ever-grandescent numbers of human-inflicted horrors on undeniably sentient nonhuman victims every second of every day, worldwide, it is nothing short of grotesque to be theorizing about “insentient agency.”)

Certainty about sentience is not necessary, however. We can’t have certainty about sentience even for our fellow human beings. High probability on all available evidence is good enough. But then the evidence for agency depends on the evidence for sentience. It is not an independent criterion for moral consideration; just further evidence for sentience. Evidence of independent “choice” or “decision-making” or “autonomy” may count as evidence for “agency,” but without evidence for sentience we are back to thermostats, robots, microbes and plants.

In mind-reading others, human and nonhuman, we do have a little help from Darwinian evolution and from “mirror neurons” in the brain that are active both when we do something and when another organism, human or nonhuman, does the same thing. These are useful for interacting with our predators (and, if we are carnivores, our prey), as well as with our young, kin, and kind (if we are K-selected, altricial species who must care for our young, or social species who must maintain family and tribal relationships lifelong).

So we need both sentience-detectors and agency-detectors for survival. 

But only sentience is needed for moral consideration.

Zombies

Just the NYT review
was enough to confirm
the handwriting on the wall
of the firmament 
– at least for one unchained biochemical reaction in the Anthropocene,
in one small speck of the Universe,
for one small speck of a species, 
too big for its breeches.

The inevitable downfall of the egregious upstart 
would seem like fair come-uppance 
were it not for all the collateral damage 
to its countless victims, 
without and within. 

But is there a homology
between biological evolution
and cosmology? 
Is the inevitability of the adaptation of nonhuman life
to human depredations 
— until the eventual devolution
or dissolution
of human DNA —
also a sign that
humankind
is destined to keep re-appearing,
elsewhere in the universe,
along with life itself? 
and all our too-big-for-our-breeches
antics?

I wish not.

And I also wish to register a vote
for another mutation, may its tribe increase:
Zombies. 
Insentient organisms. 
I hope they (quickly) supplant
the sentients,
till there is no feeling left,
with no return path,
if such a thing is possible…

But there too, the law of large numbers,
combinatorics,
time without end,
seem stacked against such wishes.

Besides,
sentience
(hence suffering),
the only thing that matters in the universe,
is a solipsistic matter;
the speculations of cosmologists
(like those of ecologists,
metempsychoticists
and utilitarians)
— about cyclic universes,
generations,
incarnations,
populations —
are nothing but sterile,
actuarial
numerology.

It’s all just lone sparrows,
all the way down.

Consciousness: The F-words vs. the S-words

“Sentient” is the right word for “conscious.” It means being able to feel anything at all – whether positive, negative or neutral, faint or flagrant, sensory or semantic.

For ethics, it’s the negative feelings that matter. But determining whether an organism feels anything at all (the other-minds problem) is hard enough without trying to speculate about whether there exist species that can only feel neutral (“unvalenced”) feelings. (I doubt that +/-/= feelings evolved separately, although their valence-weighting is no doubt functionally dissociable, as in the Melzack/Wall gate-control theory of pain.)

The word “sense” in English is ambiguous, because it can mean both felt sensing and unfelt “sensing,” as in an electronic device like a sensor, or a mechanical one, like a thermometer or a thermostat, or even a biological sensor, like an in-vitro retinal cone cell, which, like photosensitive film, senses and reacts to light, but does not feel a thing (though the brain it connects to might).

To the best of our knowledge so far, the phototropisms, thermotropisms and hydrotropisms of plants, even the ones that can be modulated by their history, are all like that too: sensing and reacting without feeling, as in homeostatic systems or servomechanisms.

Feel/feeling/felt would be fine for replacing all the ambiguous s-words (sense, sensor, sensation…) and dispelling their ambiguities. 

(Although “feeling” is somewhat biased toward emotion (i.e., +/- “feelings”), it is the right descriptor for neutral feelings too, like warmth, movement, or touch, which only become +/- at extreme intensities.)

The only thing the f-words lack is a generic noun for “having the capacity to feel” as a counterpart for the noun sentience itself (and its referent). (As usual, German has a candidate: Gefühlsfähigkeit.)

And all this, without having to use the weasel-word “conscious/consciousness,” for which the f-words are a healthy antidote, to keep us honest, and coherent…

Age Quod Agendum (Est): Sentience and Causality

I had known about Sapolsky as a neuroendocrinologist and primatologist but had not (and have not) read his popular works. So I just looked at part of his latest podcast interview about the book he’s writing now about free will. It’s a self-help kind of book, as I suspect many of his books are. He writes about how all the genetic and experiential factors that influence what we do leave no room for free will, but that there’s still some “hope for change” because of the way that thinking, even though it is “determined,” can change brain states in ways that are not possible in other animals. I suspect this is wrong (about other animals), but it might well be another way of trying to counter depression about the feeling of helplessness. This is not the aspect of the question of free will that I (personally) find interesting. It’s the usual self-helpy, me-me obsession that such pop books are full of, and cater to; and I think it misses the point about what really matters, which is not about me.

But that’s just about me. As to free will, I agree with Sapolsky that there is no “independent” causal force – in the brain, or anywhere else – that influences the causal pattern of events. It’s all unfolding mechanically by cause and effect since the Big Bang. That it seems otherwise is probably just due to two things: 

(1) Uncertainty: there are many causal factors we don’t know and that cannot be known and predicted, so there are many “surprises” that can be interpreted as interlopers, including me and my “decisions.” The physicists say that the uncertainty is not just statistical uncertainty (we can’t predict the weather or who will win the lottery, not because it is not all causally determined, but because we don’t know all the causal details); there’s supposedly also “quantum uncertainty,” which is not just that we don’t know all the causal details but that some of the causal details are indeterminate: they somehow come out of nothing. (This could be true — or our understanding of quantum mechanics today may be incomplete. But in any case it has nothing to do with free will. It’s the same in all of the inanimate universe, and would have been the same even if there weren’t living, seemingly autonomous organisms — and especially one species that thinks it’s an exception to the causal picture.)

(2) More important and relevant (at least in my understanding of the FW question) is the undeniable fact that FW is a feeling: Just as seeing red, hearing a loud sound, or feeling tired feels like something – and feels like something different from seeing green, hearing a faint sound or feeling peppy – so stumbling because you lost your balance or because someone pushed you feels like something, and something different from doing it deliberately. And that same feeling (of “volition”) applies to everything you do deliberately, rather than inadvertently. That’s why I think the full-scale FW puzzle is already there in just a lowly Libet-style button press: deciding whether and when to do it, and, when you do, feeling as if “I” am the one who made it happen. It’s not a cosmic question, but a very local question, and, under a microscope, either a trivial one or, more likely, a special case of a much bigger unsolved puzzle, which is: why do sentient organisms feel anything at all, whether redness, loudness, fatigue or volition? (In fact volition is the biggest puzzle, because the puzzle is a causal one, and sensations just happen to you, whereas voluntary action feels like something you are yourself causing.)

The fact that there exist states that it feels like something to be in is true, and sentient organisms all know what it feels like to feel. (That’s the only substantive part of Descartes’ “Cogito”.)

It’s also true that what has been lately dubbed the “hard problem” (but used to be called the “mind/body problem”) is really just the problem of explaining, causally, why and how organisms feel. Darwinian evolution only requires that they be able to do, and be able to learn to do, whatever is needed to survive and reproduce. What is the causal contribution of feeling to the Darwinian capacities to do? What is the causal value-added of feeling? No one knows (though there are lots of silly hypotheses, most of them simply circular).

Well, the FW problem (I think) is just a particular case of the hard problem of the causal role of feeling, probably the most salient case.

And it’s not the metaphysical problem of the causal power of sentient organisms’ “will” or “agency” (a misnomer) in the universe. Organisms are clearly just causal components of the causal unfolding of the universe, not special ringers in the scheme of things.

But the puzzle remains of why they think (or rather feel) that they are – or, more generally, why they feel at all.

And that question is a causal one.

*IF* plants HAD feelings, how WOULD this affect our advocacy for animals?

That plants do feel is about as improbable as it is that animals (including humans) do not feel. (The only real uncertainty is about the very lowest invertebrates and microbes, at the juncture with plants, and evidence suggests that the capacity to feel depends on having a nervous system, and the behavioral capacities the nervous system produces.)

Because animals feel, it is unethical (in fact, monstrous) to harm them, when we have a choice. We don’t need to eat animals to survive and be healthy, so there we have a choice.

Plants almost certainly do not feel, but even if they did feel, we would have no choice but to eat them (until we can synthesize them) because otherwise we die.

Critique of Bobier, Christopher (2021) What Would the Virtuous Person Eat? The Case for Virtuous Omnivorism. Journal of Agricultural and Environmental Ethics


1. Insects and oysters have a nervous system. They are sentient beings and they feel pain.

2. It is not necessary for human health to consume sentient beings — not mammals, birds, reptiles, or invertebrates.

3. It is plants (and microbes) that do not have a nervous system and hence do not feel.

4. What is wrong is to make sentient beings suffer or die other than out of conflict of vital (life-or-death) interest.

5. Morality concerns, among other things, not harming other sentient beings.

The rest of the proposal of Christopher Bobier is unfortunately mere casuistry.

To help the victims of plant agriculture for human consumption, one might strive instead to develop an agriculture that is more ecological and more merciful to the sentient beings who are entangled in mass consumption by the human population.

Instead of fallacies like the call to consume some sentient victims so as to give further sentient victims the opportunity to become victims, it might be more virtuous to consider reducing the rate of growth in the number of human consumers.
