Slime molds are certainly interesting, both as a window on the origin of multicellular life and on the origins of cellular communication and learning. (When I lived at the Oppenheims’ on Princeton Avenue in the 1970s they often invited John Tyler Bonner to their luncheons, but I don’t remember any substantive discussion of his work at those luncheons.)
The NOVA video was interesting, despite its OOH-AAH style of presentation (especially the narrators’ prosody and intonation, which I found really irritating and intrusive). The content held up once it was de-weaseled of its empty buzzwords, like “intelligence,” which means nothing (really nothing) other than the capacity to learn – a capacity shared by biological organisms and artificial devices, as well as by running computational algorithms.
The trouble with weasel-words like “intelligence” is that they are vessels inviting the projection of a sentient “mind” where there isn’t, or need not be, a mind. The capacity to learn is a necessary but certainly not a sufficient condition for sentience, which is the capacity to feel (which is what it means to have a “mind”).
Sensing and responding are not sentience either; they are just mechanical or biomechanical causality: transduction is just converting one form of energy into another. Both nonliving (mostly human-synthesized) devices and living organisms can learn. Learning (usually) requires sensors, transducers, and effectors; it can also be simulated computationally (i.e., symbolically, algorithmically). But “sensors,” whether synthetic or biological, do not require or imply sentience (the capacity to feel). They only require the capacity to detect and do.
And what sensors and effectors can (among other things) do is learn: change what they do, and what they can do. “Doing” is already a bit weaselly, implying some kind of “agency” or agenthood, which again invites projecting a “mind” onto the doer (“doing it because you feel like doing it”). But having a mind (another weasel-word, really) and having (or rather being able to be in) “mental states” really just means being able to feel (to have felt states: sentience).
And being able to learn, as slime molds can, definitely does not require or entail being able to feel. It doesn’t even require being a biological organism. Learning can be done (or will eventually be shown to be doable) by artificial devices, and it can be simulated computationally, by algorithms. Doing can be simulated purely computationally (symbolically, algorithmically), but feeling cannot be; or, otherwise put, simulated feeling is no more really feeling than simulated moving or simulated wetness is really moving or wet (even if it’s piped into a Virtual Reality device to fool our senses). It’s just code that is interpretable as feeling, or moving, or wet.
But I digress. The point is that learning capacity, artificial or biological, does not require or entail feeling capacity. And what is at issue in the question of whether an organism is sentient is not (just) whether it can learn, but whether it can feel.
Slime molds — amoebas that can transition between two states, single-celled and multicellular — are extremely interesting and informative about the evolutionary transition to multicellular organisms, cellular communication, and learning capacity. But there is no basis for concluding, from what they can do, that slime molds can feel, no matter how easy it is to interpret their learning as mind-like (“smart”). They, and their synthetic counterparts, have (or are) an organ for growing, moving, and learning, but not for feeling. The function of feeling is hard enough to explain in sentient organisms with brains, from worms and insects upward; it becomes arbitrary when we project feeling onto every system that can learn, including root tips and amoebas (or amoeba aggregations).
I try not to eat any organism that we (think we) know can feel — but I do not avoid every organism (or device) that can learn.
SIGNALS AND SENTIENCE
Yes, plants can produce sounds when they are stressed by heat, drought, or damage.
They can also produce sounds when they are swayed by the wind, and when their fruit drops to the ground.
They can also produce sights when their leaves unfurl, and when they flower.
And they can produce scents too.
And, yes, animals can detect those sounds and sights and scents, and can use them, for their own advantage (if they eat the plant), or for mutual advantage (e.g., if they are pollinators).
Plants can also produce chemical signals, for signalling within the plant, as well as for signalling between plants.
Animals (including humans) can produce internal signals, from one part of their immune system to another, or from a part of their brain to another part, or to their muscles or their immune system.
Seismic shifts (earth tremors) can be detected by animals, and by machines.
Pheromones can be produced by human secretions and detected and reacted to (but not smelled) by other humans.
The universe is full of “signals,” most of them neither detected nor produced by living organisms, plant or animal.
Both living organisms and nonliving machines can “detect” and react to signals, both internal and external signals; but only sentient organisms can feel them.
To feel signals, it is not enough to be alive and to detect and react to them; an organ of feeling is needed: a nervous system.
Nor are most of the signals produced by living organisms intentional; for a signal to be intentional, the producer has to be able to feel that it is producing it; that too requires an organ of feeling.
Stress is an internal state that signals damage in a living organism; but in an insentient organism, stress is not a felt state.
Butterflies have an organ of feeling; they are sentient.
Some species of butterfly have evolved a coloration that mimics that of another, poisonous, species: a signal that deters predators that have learned that prey with that coloration is often poisonous.
The predators feel that signal; the butterflies that produce it do not.
Evolution does not feel either; it is just an insentient mechanism by which genes that code for traits that help an organism to survive and reproduce get passed on to its progeny.
Butterflies, though sentient, do not signal their deterrent color to their predators intentionally.
Nor do plants that signal by sound, sight or scent, to themselves or others, do so intentionally.
All living organisms except plants must eat other living organisms to survive; plants alone can photosynthesize, with just light, CO2, and minerals.
But not all living organisms are sentient.
There is no evidence that plants are sentient, even though they are alive, and produce, detect, and react to signals.
They lack an organ of feeling, a nervous system.
Vegans need to eat to live.
But they do not need to eat organisms that feel.
Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Perelman, R., Boonman, A., … & Hadany, L. (2019). Plants emit informative airborne sounds under stress. bioRxiv 507590.
Wilkinson, S., & Davies, W. J. (2002). ABA‐based chemical signalling: the co‐ordination of responses to stress in plants. Plant, Cell & Environment, 25(2), 195-210.
For ethics, it’s the negative feelings that matter. But determining whether an organism feels anything at all (the other-minds problem) is hard enough without trying to speculate about whether there exist species that can only feel neutral (“unvalenced”) feelings. (I doubt that +/-/= feelings evolved separately, although their valence-weighting is no doubt functionally dissociable, as in the Melzack/Wall gate-control theory of pain.)
The word “sense” in English is ambiguous, because it can mean both felt sensing and unfelt “sensing,” as in an electronic device like a sensor, or a mechanical one, like a thermometer or a thermostat, or even a biological sensor, like an in-vitro retinal cone cell, which, like photosensitive film, senses and reacts to light, but does not feel a thing (though the brain it connects to might).
To the best of our knowledge so far, the phototropisms, thermotropisms and hydrotropisms of plants, even the ones that can be modulated by their history, are all like that too: sensing and reacting without feeling, as in homeostatic systems or servomechanisms.
Feel/feeling/felt would be fine for replacing all the ambiguous s-words (sense, sensor, sensation…) and dispelling their ambiguities.
(Although “feeling” is somewhat biased toward emotion (i.e., +/- “feelings”), it is the right descriptor for neutral feelings too, like warmth, movement, or touch, which only become +/- at extreme intensities.)
The only thing the f-words lack is a generic noun for “having the capacity to feel,” as a counterpart for the noun sentience itself (and its referent). (As usual, German has a candidate: Gefühlsfähigkeit.)
And all this, without having to use the weasel-word “conscious/consciousness,” for which the f-words are a healthy antidote, to keep us honest, and coherent…
1. Computation is just the manipulation of arbitrary formal symbols, according to rules (algorithms) applied to the symbols’ shapes, not their interpretations (if any). (A toy sketch following this list illustrates the point.)
2. The symbol-manipulations have to be done by some sort of physical hardware, but the physical composition of the hardware is irrelevant, as long as it executes the right symbol manipulation rules.
3. The symbols need not be interpretable as meaning anything – there can be a Turing Machine that executes a program that is absolutely meaningless, like Hesse’s “Glass Bead Game” – but computationalists are mostly interested in algorithms that can be given a coherent, systematic interpretation by the user.
4. The Weak Church/Turing Thesis is that computation (symbol manipulation, like a Turing Machine) is what mathematicians do: symbol manipulations that are systematically interpretable as the truths and proofs of mathematics.
5. The Strong Church/Turing Thesis (SCTT) is that almost everything in the universe can be simulated (modelled) computationally.
6. A computational simulation is the execution of symbol-manipulations by hardware in which the symbols and manipulations are systematically interpretable by users as the properties of a real object in the real world (e.g., the simulation of a pendulum or an atom or a neuron or our solar system).
7. Computation can simulate only “almost” everything in the world because, symbols and computations being digital, computer simulations of real-world objects can only be approximate. Computation is merely discrete and finite, hence it cannot encode every possible property of the real-world object. But the approximation can be tightened as closely as we wish, given enough hardware capacity and an accurate enough computational model. (The second sketch below, after the exercise, makes this concrete.)
8. One of the pieces of evidence for the truth of the SCTT is the fact that it is possible to connect the hardware that is doing the simulation of an object to another kind of hardware (not digital but “analog”), namely, Virtual Reality (VR) peripherals (e.g., real goggles and gloves) which are worn by real, biological human beings.
9. Hence the accuracy of a computational simulation of a coconut can be tested in two ways: (1) by systematically interpreting the symbols as the properties of a coconut and testing whether they correctly correspond to and predict the properties of a real coconut or (2) by connecting the computer simulation to a VR simulator in a pair of goggles and gloves, so that a real human being wearing them can manipulate the simulated coconut.
10. One could, of course, again on the basis of the SCTT, computationally simulate not only the coconut, but the goggles, the gloves, and the human user wearing them — but that would be just computer simulation and not VR!
11. And there we have arrived at the fundamental conflation (between computational simulation and VR) that is made by sci-fi enthusiasts (like the makers and viewers of The Matrix and the like, and, apparently, David Chalmers).
12. Those who fall into this conflation have misunderstood the nature of computation (and the SCTT).
13. Nor have they understood the distinction between appearance and reality – the one that’s missed by those who, instead of just worrying that someone else might be a figment of their imagination, worry that they themselves might be a figment of someone else’s imagination.
14. Neither a computationally simulated coconut nor a VR coconut is a coconut, let alone a pumpkin in another world.
15. Computation is just semantically interpretable symbol-manipulation (Searle’s “squiggles and squoggles”): a symbolic oracle. The symbol manipulation can be done by a computer, and the interpretation can be done in a person’s head — or it can be transmitted (causally linked) to dedicated (non-computational) hardware, such as a desk calculator, a computer screen, or VR peripherals, allowing users’ brains to perceive the interpretation through their senses rather than just through their thoughts and language.
16. In the context of the Symbol Grounding Problem and Searle’s Chinese-Room Argument against “Strong AI,” to conflate interpretable symbols with reality is to get lost in a hermeneutic hall of mirrors. (That’s the locus of Chalmers’s “Reality.”)
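To make point 1 concrete, here is a minimal toy sketch (in Python; the state names and rule table are invented for illustration, not any canonical machine). The rules consult only the shapes of the symbols on the tape; that the output is interpretable as unary addition (e.g., “111+11” as 3 + 2 = 5) is supplied entirely by the user, as per points 2 and 3.

    # Toy symbol-manipulating machine: rules map (state, symbol shape)
    # to (new state, new symbol, head move). No meanings anywhere.
    RULES = {
        ('scan',  '1'): ('scan',  '1', +1),
        ('scan',  '+'): ('scan2', '1', +1),  # overwrite '+' with '1'
        ('scan2', '1'): ('scan2', '1', +1),
        ('scan2', '_'): ('erase', '_', -1),  # reached the blank; back up
        ('erase', '1'): ('halt',  '_', -1),  # erase one trailing '1'
    }

    def run(tape):
        tape, head, state = list(tape) + ['_'], 0, 'scan'
        while state != 'halt':
            state, tape[head], move = RULES[(state, tape[head])]
            head += move
        return ''.join(tape).strip('_')

    print(run('111+11'))  # prints '11111': interpretable (by us) as 3 + 2 = 5

And, per point 2, the same rule table could be executed by any physical hardware whatsoever; its composition is irrelevant.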
Exercise for the reader: Does Turing make the same conflation in implying that everything is a Turing Machine (rather than just that everything can be simulated symbolically by a Turing Machine)?
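To make the approximation claim of point 7 concrete as well, here is a second toy sketch (again Python, again purely illustrative): a digital simulation of a pendulum. Being discrete, it never encodes the pendulum exactly, but its error shrinks as the timestep shrinks; the approximation can be tightened as much as the hardware allows.

    # Toy digital pendulum: explicit-Euler integration of
    # theta'' = -(g/L) * sin(theta). Discrete, hence only approximate.
    import math

    def simulate(theta0, dt, t_end, g=9.81, length=1.0):
        theta, omega = theta0, 0.0
        for _ in range(round(t_end / dt)):
            theta, omega = (theta + omega * dt,
                            omega - (g / length) * math.sin(theta) * dt)
        return theta

    # For a small initial angle the near-exact solution is
    # theta(t) = theta0 * cos(sqrt(g/L) * t); the error falls with dt.
    exact = 0.01 * math.cos(math.sqrt(9.81) * 1.0)
    for dt in (0.1, 0.01, 0.001):
        print(dt, abs(simulate(0.01, dt, 1.0) - exact))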
@J-Wiki: Your edits have been thoughtful, so I don’t want to dwell on quibbles. Tom Nagel’s remark is correct: no one knows how a physical state can ‘be’ a mental (i.e., felt) state. That is in fact yet another way of stating the “hard problem” itself!
But to say instead “No one knows how a physical state can be or yield a mental state” is not just to state the problem, but to take a position on it, namely, the hypothesis of interactionist dualism (or something along those lines).
But this is exactly what acknowledging that it is a “hard problem” is meant to avoid. Yes, no one has any idea how a physical state could ‘be’ a mental state, but that already covers the fact that no one has any idea how a physical state could cause a mental state either!
Singling out that particular symptom of the problem and elevating it to the statement of the (hard) problem itself amounts to giving one particular hypothesis a privileged position among the many (vacuous) hypotheses that have been the symptoms of the (hard) problem itself.
By the same token one might have extended the statement of the problem to include all of the popular hypothetical (and vacuous) non-solutions: (1) the physical is identical with the mental (materialism, identity theory), (2) the physical causes the mental (epiphenomenalism), (3) the mental interacts with the physical (interactionism), (4) the physical replaces the mental (eliminativism), (5) there is no mental (physicalism), (6) there is only the mental (mentalism), (7) the mental is our only perspective on the physical (dual-aspectism), (8) mental states and physical states are all just “functional” states (functionalism), etc. etc.
All these non-explanatory non-solutions are already implied by the hard problem itself. That P might (somehow) be the “cause” of M is already one of the many inchoate hypotheses opened up by admitting that we have no idea whatsoever as to how P could “be” M (or M could “be” P).
That’s why I think it would have been more NPOV to use one neutral copula as the verb-of-puzzlement (“to be”) rather than a neutral one plus an arbitrary choice among the many problematic hypotheses on the market (“to yield,” i.e., “to cause”).
From glancing at (but not reading) other Wp articles you have edited, a hypothesis occurs to me: might your own PoV be influenced by quantum hypotheses…?
As for me: “Hypotheses non fingo” —User:Harnad (talk) 13:10, 10 September 2018 (UTC)
@Biogeographist: I’m afraid I don’t know of recent, relevant writings on this question. I can only say that I find “dual-aspect” theory as unhelpful as the other non-solutions listed in paragraph 9 above (or the many more that could have been mentioned). I don’t know what Prentner means by “finer temporal resolution” (though I’m pretty sure that by “tantamount” he means “paramount”). My guess is that the “is” ambiguity (“is” as stating a proposition and “is” as making an identity claim) is not really a profound matter. There is always a problem with physical-to-mental or mental-to-physical predication because of the (unsolved) “hard problem.” We do not know how (or why) feeling is generated. Dualists insist on reminding us that we don’t even know whether feeling is (physically) generated, or somehow sui generis. Timing won’t help.
(I assume the hope is that if the physical (functional) state and the mental (felt) state don’t occur simultaneously, this will somehow help sort things out: I think it won’t. I did note once, in a Benjamin Libet context (and Dan Dennett cites it in one of his books on consciousness), that it is impossible to time the exact instant of a mental event: it could precede, coincide with, or follow its physical correlate (and subjective report certainly cannot settle the matter!). There is no objective way to pinpoint the subjective event except very approximately: not the fine-grained timing Prentner seems to want. Saul Sternberg thought it could be done statistically, with averaging, as with event-related potentials. But I think it wouldn’t help either way. Whether feeling occurs before, during, or after a neural correlate, it does not help with the hard problem, which is a problem of causal explanation, not chronometry.) —User:Harnad (talk) 22:01, 12 September 2018 (UTC)
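(A toy numerical illustration of the averaging idea (my construction, not Sternberg’s actual method): if each trial’s measured onset is the true onset plus independent noise, the error of the mean estimate falls roughly as one over the square root of the number of trials. But that only sharpens the timing of the objective neural event; it does nothing to pinpoint the subjective one, for the reasons above.)

    # Toy sketch of trial-averaging (my construction, not Sternberg's method):
    # the error of the mean onset estimate shrinks roughly as 1/sqrt(N).
    import random, statistics
    random.seed(0)

    true_onset_ms = 200.0
    for n_trials in (10, 1000, 100000):
        trials = [random.gauss(true_onset_ms, 50.0) for _ in range(n_trials)]
        print(n_trials, round(abs(statistics.mean(trials) - true_onset_ms), 3))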
@Biogeographist: First, “no objective way to pinpoint the subjective event except very approximately” is not the same as no way to pinpoint it at all.
Second, the limits of human timing accuracy for detecting felt states or events are pretty well known. I can say whether it felt as if my tooth-ache started a half-second earlier or later but not whether it started a millisecond earlier or later. So the temporal boundaries of felt instants are probably too coarse for pinning them to neural correlates (which can be much finer).
Of course one can always dream of new technology, but that would still only be based on more accurate timing of objective neural events (unless you are imagining either a drug or a form of brain stimulation that increases the limits of human timing accuracy for detecting felt states, which I think is unlikely, though not inconceivable).
But even if subjective detection could be made as accurate as objective (neural) detection, how can that more accurate chronometry help with causal explanation? As I said, the felt instant could precede, coincide with or follow its neural correlate, but none of the three options helps explain how or why (or even whether) neural events cause feelings.
The causal problem is not in the timing. It’s in the functionality. Neural events can clearly (and unproblematically) cause motor as well as other physiological activity (including “information-processing”), all of which can be objectively timed and measured. No causal problem whatsoever there. Suppose some neural events turn out to slightly precede or even be exactly simultaneous with felt states (within the limits of measurement): How would that help explain how and why the felt states are felt? Even if the felt states systematically precede their neural correlates (within the limits of measurement), how does that help explain how and why felt states are felt?
That’s why I think “temporality” is not going to help solve the hard problem. I think the real problem is not in the timing of either firings or feelings. The problem is that feeling seems to be causally superfluous for causing, hence explaining, our cognitive capacities — once the “easy problems” (which are “only” about the causal mechanisms of behavior, cognitive capacity and physiology) have been solved.
Imagine a scenario in which the feeling precedes its neural correlate; the neural correlate can also occur without the preceding feeling; but we can show that the neural correlate alone, without the preceding feeling, is incapable of generating some behavioral capacity (i.e., of solving an easy problem), whereas, when preceded by the feeling, it can. This sounds like the ultimate gift to the dualist: but what does it explain, causally? Nothing. It is just a just-so story, causally speaking. It leaves just as big a causal mystery as the scenario in which the neural correlate precedes or coincides with the feeling. None of this gives the slightest hint of a solution. Neither monism nor dualism solves the hard problem. It just soothes metaphysical angst, hermeneutically.
Now there could be a form of dualism that does give a causal explanation, hence a solution to the hard problem: if, in addition to the four fundamental forces of nature (gravitation, electromagnetism, and the strong and weak nuclear forces), there were a fifth force that corresponded to feeling (or willing), then we would be no more entitled to ask “how and why does this fifth force pull or push?” than we are to ask how and why the other four fundamental forces pull or push. They are simply fundamental forces of nature, as described by the fundamental laws of nature, and as supported by the empirical evidence. — But that is exactly what feeling as a fifth force lacks: there is no empirical evidence whatsoever of a fifth fundamental force, whereas there is no end of observable, measurable evidence for the other four.
So even on the last hypothetical scenario (feeling precedes neural correlate and some behavioral capacity cannot be generated by the neural correlate alone when it is not preceded by the feeling), the “causal power” of feeling would remain a mystery: a hard problem, unsolved. The only thing I can say in favor of this fantasy scenario (which I don’t believe) is that if it did turn out to be true, it would mean that the “easy problems” cannot all be solved without the (inexplicable) help of feeling, and hence that some easy problems turn out to be hard! —User:Harnad (talk) 22:48, 13 September 2018 (UTC)
@Biogeographist: Yes, Dave Chalmers may not have written about explanation or causal explanation in relation to the hard problem, but I have. ;>) And I don’t think the hard problem has much to do with our subjective perplexities about what it is that we’re feeling: A coherent causal explanation of how and why tissue-damage — besides generating the “easy” adaptive responses (limb-withdrawal, escape, avoidance, learning, memory) — also generates “ouch” would be sufficient to solve the hard problem (without any further existential or phenomenological introspection on the meaning or quality of “ouch”). Best wishes, —User:Harnad (talk) 12:19, 14 September 2018 (UTC)