“hard problem” (consciousness) – Skywritings (Stevan Harnad)

12 Points on Confusing Virtual Reality with Reality (16 April 2022)

Comments on: Bibeau-Delisle, A., & Brassard FRS, G. (2021). Probability and consequences of living inside a computer simulation. Proceedings of the Royal Society A, 477(2247), 20200658.

  1. What is Computation? It is the manipulation of arbitrarily shaped formal symbols in accordance with symbol-manipulation rules (algorithms) that operate only on the (arbitrary) shape of the symbols, not their meaning.
  2. Interpretability. The only computations of interest, though, are the ones that can be given a coherent interpretation.
  3. Hardware-Independence. The hardware that executes the computation is irrelevant. The symbol manipulations have to be executed physically, so there does have to be hardware that executes them, but the physics of the hardware is irrelevant to the interpretability of the software it is executing. It is just symbol manipulation; it could have been done with pencil and paper.
  4. What is the Weak Church/Turing Thesis? That what mathematicians are doing is computation: formal symbol manipulation, executable by a Turing machine, which is finite-state hardware that can read, write, advance the tape, change state, or halt (see the sketch after this list).
  5. What is Simulation? It is computation that is interpretable as modelling properties of the real world: size, shape, movement, temperature, dynamics, etc. But it is still only computation: coherently interpretable manipulation of symbols.
  6. What is the Strong Church/Turing Thesis? That computation can simulate (i.e., model) just about anything in the world to as close an approximation as desired (if you can find the right algorithm). It is possible to simulate a real rocket as well as the physical environment of a real rocket. If the simulation is a close enough approximation to the properties of a real rocket and its environment, it can be manipulated computationally to design and test new, improved rocket designs. If the improved design works in the simulation, then it can be used as the blueprint for building a real rocket that applies the new design in the real world, with real materials, and it works.
  7. What is Reality? It is the real world of objects we can see and measure.
  8. What is Virtual Reality (VR)? Devices that can stimulate (fool) the human senses by transmitting the output of simulations of real objects to virtual-reality gloves and goggles. For example, VR can transmit the output of the simulation of a melting ice cube to gloves and goggles that make you feel you are seeing and feeling an ice cube, melting. But there is no ice cube and no melting; just symbol manipulations interpretable as an ice cube, melting.
  9. What is Certainly True (rather than just highly probably true on all available evidence)? Only what is provably true in formal mathematics. Provable means necessarily true, on pain of contradiction with formal premises (axioms). Everything else that is true is not provably true (hence not necessarily true), just probably true.
  10. What is Illusion? Whatever fools the senses. There is no way to be certain that what our senses and measuring instruments tell us is true (because it cannot be proved formally to be necessarily true, on pain of contradiction). But almost-certain on all the evidence is good enough, for both ordinary life and science.
  11. Being a Figment? To understand the difference between a sensory illusion and reality is perhaps the most basic insight that anyone can have: the difference between what I see and what is really there. “What I am seeing could be a figment of my imagination.” But to imagine that what is really there could be a computer simulation of which I myself am a part (i.e., symbols manipulated by computer hardware, symbols that are interpretable as the reality I am seeing, as if I were in a VR) is to imagine that the figment could be the reality, which is simply incoherent, circular, self-referential nonsense.
  12. Hermeneutics. Those who think this way have become lost in the “hermeneutic hall of mirrors,” mistaking symbols that are interpretable (by their real minds and real senses) as reflections of themselves for their real selves, and mistaking the simulated ice cube for a “real” ice cube.
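To make points 1 and 4 concrete, here is a minimal sketch of a Turing machine; the machine and its rule table are invented for this example, not drawn from the paper under discussion. Note that the rules consult only which token is under the head, never what the tokens might mean.

```python
# A minimal Turing machine: finite-state control that can read a symbol,
# write a symbol, advance the tape, change state, or halt. The rules
# operate only on the (arbitrary) shape of the symbols.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=10_000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))                # sparse tape; blank is "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += {"L": -1, "R": +1}[move]
    return "".join(cells[i] for i in sorted(cells))

# An invented rule table: swap every "a" for "b" and vice versa, halting
# at the first blank. "a" and "b" are just shapes; nothing in the machine
# knows (or needs to know) what they are interpretable as.
flip = {
    ("start", "a"): ("b", "R", "start"),
    ("start", "b"): ("a", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip, "abba"))          # -> "baab_"
```

Per point 2, such a run becomes interesting only if the symbols can be given a coherent interpretation; the machine itself trades purely in shapes.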
Learning and Feeling (8 April 2022)

Re: the NOVA/PBS video on slime mold.

Slime molds are certainly interesting, both in relation to the origin of multicellular life and to the origins of cellular communication and learning. (When I lived at the Oppenheims’ on Princeton Avenue in the 1970s, they often invited John Tyler Bonner to their luncheons, but I don’t remember any substantive discussion of his work at those luncheons.)

The NOVA video was interesting despite the OOH-AAH style of presentation (especially the narrators’ prosody and intonation, which I found irritating and intrusive). The content held up once it was de-weaseled of empty buzzwords like “intelligence,” which means nothing (really nothing) other than the capacity to learn, a capacity shared by biological organisms, artificial devices, and running computational algorithms.

The trouble with weasel-words like “intelligence” is that they are vessels inviting the projection of a sentient “mind” where there isn’t, or need not be, a mind. The capacity to learn is a necessary but certainly not a sufficient condition for sentience, which is the capacity to feel (and feeling is what it means to have a “mind”).

Sensing and responding are not sentience either; they are just mechanical or biomechanical causality: transduction is just converting one form of energy into another. Both nonliving (mostly human-synthesized) devices and living organisms can learn. Learning (usually) requires sensors, transducers, and effectors; it can also be simulated computationally (i.e., symbolically, algorithmically). But “sensors,” whether synthetic or biological, do not require or imply sentience (the capacity to feel). They only require the capacity to detect and do.

And one of the things sensors and effectors can do is learn: change what they do, and what they can do. “Doing” is already a bit weaselly, implying some kind of “agency” or agenthood, which again invites projecting a “mind” onto it (“doing it because you feel like doing it”). But having a mind (another weasel-word, really) and having (or rather being able to be in) “mental states” really just means being able to feel (to have felt states: sentience).

And being able to learn, as slime molds can, definitely does not require or entail being able to feel. It does not even require being a biological organism. Learning can (or will eventually be shown to be able to) be done by artificial devices, and it can be simulated computationally, by algorithms (see the sketch below). Doing can be simulated purely computationally (symbolically, algorithmically), but feeling cannot be; or, otherwise put, simulated feeling is not really feeling, any more than simulated moving or simulated wetness is really moving or wet (even if it is piped into a Virtual Reality device to fool our senses). It is just code that is interpretable as feeling, or moving, or wet.
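Here is a minimal sketch of that point; the scenario and numbers are invented for illustration. The code “learns,” in the bare sense of changing what it does as a function of past consequences, and nothing in it is (or needs to be) a felt state.

```python
import random

# A trivial learning device: it detects an outcome and adjusts what it
# does next time. Detection and adjustment, not feeling.

values = {"left": 0.0, "right": 0.0}   # learned estimates, initially neutral
payoff = {"left": 0.2, "right": 0.8}   # the environment: "right" pays off more

def act():
    # Mostly do what has worked so far; occasionally try the alternative.
    if random.random() < 0.1:
        return random.choice(list(values))
    return max(values, key=values.get)

for _ in range(1000):
    choice = act()
    outcome = 1.0 if random.random() < payoff[choice] else 0.0  # detected, not felt
    values[choice] += 0.05 * (outcome - values[choice])         # change in doing

print(values)   # "right" ends up preferred: the device has learned
```

The device’s behavior changes with experience, which is all that learning means here; nothing in the loop requires sentience.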

But I digress. The point is that learning capacity, artificial or biological, does not require or entail feeling capacity. And what is at issue in the question of whether an organism is sentient is not (just) whether it can learn, but whether it can feel. 

Slime molds (amoebas that can transition between two states, single-celled and multicellular) are extremely interesting and informative about the evolutionary transition to multicellular organisms, cellular communication, and learning capacity. But there is no basis for concluding, from what they can do, that slime molds can feel, no matter how easy it is to interpret their learning as mind-like (“smart”). They, and their synthetic counterparts, have (or are) an organ for growing, moving, and learning, but not for feeling. The function of feeling is hard enough to explain in sentient organisms with brains, from worms and insects upward; it becomes arbitrary when we project feeling onto every system that can learn, including root tips and amoebas (or amoeba aggregations).

I try not to eat any organism that we (think we) know can feel, but I have no such scruple about organisms (or devices) that can merely learn.

SIGNALS AND SENTIENCE (5 April 2022)

Yes, plants can produce sounds when they are stressed by heat or drought or damage.

They can also produce sounds when they are swayed by the wind, and when their fruit drops to the ground.

They can also produce sights when their leaves unfurl, and when they flower.

And they can produce scents too.

And, yes, animals can detect those sounds and sights and scents, and can use them for their own advantage (if they eat the plant) or for mutual advantage (e.g., if they are pollinators).

Plants can also produce chemical signals, for signalling within the plant, as well as for signalling between plants.

Animals (including humans) can produce internal signals, from one part of their immune system to another, or from a part of their brain to another part, or to their muscles or their immune system.

Seismic shifts (earth tremors) can be detected by animals, and by machines.

Pheromones can be produced by human secretions and detected and reacted to (but not smelled) by other humans.

The universe is full of “signals,” most of them neither detected nor produced by living organisms, plant or animal.

Both living organisms and nonliving machines can “detect” and react to signals, both internal and external signals; but only sentient organisms can feel them. 

To feel signals, it is not enough to be alive and to detect and react to them; an organ of feeling is needed: a nervous system.

Nor are most of the signals produced by living organisms intentional; for a signal to be intentional, the producer has to be able to feel that it is producing it; that too requires an organ of feeling.

Stress is an internal state that signals damage in a living organism; but in an insentient organism, stress is not a felt state.

Butterflies have an organ of feeling; they are sentient. 

Some species of butterfly have evolved a coloration that mimics the coloration of another, poisonous species: a signal that deters predators who have learned that such coloration often means poison.

The predators feel that signal; the butterflies that produce it do not.

Evolution does not feel either; it is just an insentient mechanism by which genes that code for traits that help an organism to survive and reproduce get passed on to its progeny.

Butterflies, though sentient, do not signal their deterrent color to their predators intentionally.

Nor do plants that signal by sound, sight or scent, to themselves or others, do so intentionally.

All living organisms except plants must eat other living organisms to survive. Plants alone can photosynthesize, with just light, CO2, and minerals.

But not all living organisms are sentient.

There is no evidence that plants are sentient, even though they are alive, and produce, detect, and react to signals. 

They lack an organ of feeling, a nervous system.

Vegans need to eat to live.

But they do not need to eat organisms that feel.

Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Perelman, R., Boonman, A., … & Hadany, L. (2019). Plants emit informative airborne sounds under stress. bioRxiv, 507590.

Wilkinson, S., & Davies, W. J. (2002). ABA-based chemical signalling: the co-ordination of responses to stress in plants. Plant, Cell & Environment, 25(2), 195-210.

Consciousness: The F-words vs. the S-words (20 January 2022)

“Sentient” is the right word for “conscious.” It means being able to feel anything at all, whether positive, negative or neutral, faint or flagrant, sensory or semantic.

For ethics, it’s the negative feelings that matter. But determining whether an organism feels anything at all (the other-minds problem) is hard enough without trying to speculate about whether there exist species that can only feel neutral (“unvalenced”) feelings. (I doubt that +/-/= feelings evolved separately, although their valence-weighting is no doubt functionally dissociable, as in the Melzack/Wall gate-control theory of pain.)

The word “sense” in English is ambiguous, because it can mean both felt sensing and unfelt “sensing,” as in an electronic device like a sensor, or a mechanical one, like a thermometer or a thermostat, or even a biological sensor, like an in-vitro retinal cone cell, which, like photosensitive film, senses and reacts to light, but does not feel a thing (though the brain it connects to might).

To the best of our knowledge so far, the phototropisms, thermotropisms and hydrotropisms of plants, even the ones that can be modulated by their history, are all like that too: sensing and reacting without feeling, as in homeostatic systems or servomechanisms.

Feel/feeling/felt would be fine for replacing all the ambiguous s-words (sense, sensor, sensation…) and dispelling their ambiguities. 

(Although “feeling” is somewhat biased toward emotion (i.e., +/- “feelings”), it is the right descriptor for neutral feelings too, like warmth, movement, or touch, which only become +/- at extreme intensities.)

The only thing the f-words lack is a generic noun for “having the capacity to feel” as a counterpart for the noun “sentience” itself (and its referent). (As usual, German has a candidate: Gefühlsfähigkeit.)

And all this without having to use the weasel-word “conscious/consciousness,” for which the f-words are a healthy antidote, keeping us honest and coherent…

Appearance and Reality (3 January 2022)

Re: https://www.nytimes.com/interactive/2021/12/13/magazine/david-j-chalmers-interview.html

1. Computation is just the manipulation of arbitrary formal symbols, according to rules (algorithms) applied to the symbols’ shapes, not their interpretations (if any).

2. The symbol-manipulations have to be done by some sort of physical hardware, but the physical composition of the hardware is irrelevant, as long as it executes the right symbol manipulation rules.

3. The symbols need not be interpretable as meaning anything (there can be a Turing Machine that executes a program that is absolutely meaningless, like Hesse’s “Glass Bead Game”), but computationalists are mostly interested in algorithms that can be given a coherent, systematic interpretation by the user.

4. The Weak Church/Turing Thesis is that computation (symbol manipulation, like a Turing Machine) is what mathematicians do: symbol manipulations that are systematically interpretable as the truths and proofs of mathematics.

5. The Strong Church/Turing Thesis (SCTT) is that almost everything in the universe can be simulated (modelled) computationally.

6. A computational simulation is the execution of symbol-manipulations by hardware in which the symbols and manipulations are systematically interpretable by users as the properties of a real object in the real world (e.g., the simulation of a pendulum or an atom or a neuron or our solar system).

7. Computation can simulate only “almost” everything in the world because, symbols and computations being digital, computer simulations of real-world objects can only be approximate. Computation is discrete and finite, hence it cannot encode every property of the real-world object. But the approximation can be tightened as closely as we wish, given enough hardware capacity and an accurate enough computational model (see the numerical sketch at the end of this post).

8. One of the pieces of evidence for the truth of the SCTT is the fact that it is possible to connect the hardware that is doing the simulation of an object to another kind of hardware (not digital but “analog”), namely, Virtual Reality (VR) peripherals (e.g., real goggles and gloves) which are worn by real, biological human beings.

9. Hence the accuracy of a computational simulation of a coconut can be tested in two ways: (1) by systematically interpreting the symbols as the properties of a coconut and testing whether they correctly correspond to and predict the properties of a real coconut or (2) by connecting the computer simulation to a VR simulator in a pair of goggles and gloves, so that a real human being wearing them can manipulate the simulated coconut.

10. One could, of course, again on the basis of the SCTT, computationally simulate not only the coconut, but the goggles, the gloves, and the human user wearing them — but that would be just computer simulation and not VR!

11. And there we have arrived at the fundamental conflation (between computational simulation and VR) that is made by sci-fi enthusiasts (like the makers and viewers of The Matrix and the like, and, apparently, David Chalmers).

12. Those who fall into this conflation have misunderstood the nature of computation (and the SCTT).

13. Nor have they understood the distinction between appearance and reality: the one that’s missed by those who, instead of just worrying that someone else might be a figment of their imagination, worry that they themselves might be a figment of someone else’s imagination.

14. Neither a computationally simulated coconut nor a VR coconut is a coconut, let alone a pumpkin in another world.

15. Computation is just semantically interpretable symbol-manipulation (Searle’s “squiggles and squoggles”); a symbolic oracle. The symbol manipulation can be done by a computer, and the interpretation can be done in a person’s head, or it can be transmitted (causally linked) to dedicated (non-computational) hardware, such as a desk calculator, a computer screen, or VR peripherals, allowing users’ brains to perceive the interpretations through their senses rather than just through their thoughts and language.

16. In the context of the Symbol Grounding Problem and Searle’s Chinese-Room Argument against “Strong AI,” to conflate interpretable symbols with reality is to get lost in a hermeneutic hall of mirrors. (That’s the locus of Chalmers’s “Reality.”)

Exercise for the reader: Does Turing make the same conflation in implying that everything is a Turing Machine (rather than just that everything can be simulated symbolically by a Turing Machine)?
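To make point 7 concrete, here is a minimal numerical sketch; the example (one second of free fall, integrated in discrete steps) is mine, not Chalmers’s or Turing’s. The simulation is discrete and finite, so it only approximates the continuous trajectory, but every refinement of the step tightens the approximation.

```python
# Discrete simulation of one second of free fall, compared with the exact
# continuous solution x = g * t^2 / 2. The simulation is digital, hence
# approximate -- but the error shrinks as the time step shrinks.

G = 9.81  # gravitational acceleration, m/s^2

def simulate_fall(dt, t_total=1.0):
    steps = round(t_total / dt)
    x, v = 0.0, 0.0
    for _ in range(steps):
        x += v * dt          # discrete stand-in for continuous integration
        v += G * dt
    return x

exact = G * 1.0 ** 2 / 2
for dt in (0.1, 0.01, 0.001):
    error = abs(simulate_fall(dt) - exact)
    print(f"dt = {dt}: error = {error:.6f} m")
# Each tenfold refinement of dt cuts the error about tenfold: the
# approximation can be tightened as closely as we wish.
```

This is the sense in which simulation approximates “as closely as desired”: the symbols never become a falling object; they just track it to within an ever-shrinkable error.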

Wikipedia Talk on the Hard Problem of Consciousness (14 September 2018)


@J-Wiki: Your edits have been thoughtful, so I don’t want to dwell on quibbles. Tom Nagel’s remark is correct: no one knows how a physical state can ‘be’ a mental (i.e., felt) state. That is in fact yet another way to state the “hard problem” itself!

But to say instead “No one knows how a physical state can be or yield a mental state” is not just to state the problem, but to take a position on it, namely, the hypothesis of interactionist dualism (or something along those lines).

My edit was intended to avoid having the article take a position, by adding the possibility of the dualist solution (“yield”) to the possibility of a monist solution (“be”). This is in keeping with Chalmers’s original statement of the “hard problem”:
It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
He uses the verb “to arise”, which Merriam-Webster defines as “to begin to occur or to exist”. Of course, the same word could be used in the introduction, but it wouldn’t be the best writing style. J-Wiki (talk) 06:35, 11 September 2018 (UTC)

But this is exactly what acknowledging that it is a “hard problem” is meant to avoid. Yes, no one has any idea how a physical state could ‘be’ a mental state, but that already covers the fact that no one has any idea how a physical state could cause a mental state either!

The inability to be something does not necessarily preclude the ability to cause something. J-Wiki (talk) 06:35, 11 September 2018 (UTC)

Singling out that particular symptom of the problem and elevating it to the statement of the (hard) problem itself amounts to giving one particular hypothesis a privileged position among the many (vacuous) hypotheses that have been the symptoms of the (hard) problem itself.

By the same token one might have extended the statement of the problem to include all of the popular hypothetical (and vacuous) non-solutions: (1) the physical is identical with the mental (materialism, identity theory), (2) the physical causes the mental (parallelism), (3) the mental interacts with the physical (interactionism), (4) the physical replaces the mental (eliminativism), (5) there is no mental (physicalism), (6) there is only the mental (mentalism), (7) the mental is our only perspective on the physical (dual aspectism), (8) mental states and physical states are all just “functional” states (functionalism), etc. etc.

All these non-explanatory non-solutions are already implied by the hard problem itself. That P might (somehow) be the “cause” of M is already one of the many inchoate hypotheses opened up by admitting that we have no idea whatsoever as to how P could “be” M (or M could “be” P).

Yes, there are other theoretical possibilities, but Chalmers didn’t consider them popular enough to mention. Therefore, the article, in summarizing the topic, shouldn’t.
Also, the recognition that there is a “hard problem” of consciousness does not imply that the solution must be monist. It is intended to make clear that finding physical explanations of even all of the “easy problems” will not solve the “hard problem”.J-Wiki (talk) 06:35, 11 September 2018 (UTC)

That’s why I think it would have been more NpV to use one neutral copula as the verb-of-puzzlement (“to be”) rather than a neutral one plus an arbitrary choice among the many problematic hypotheses on the market (“to yield” — i.e., “to cause”).

As explained above, I don’t see the verb “to be” as neutral in this case.J-Wiki (talk) 06:35, 11 September 2018 (UTC)

From glancing at (but not reading) other Wp articles you have edited, a hypothesis occurs to me: might your own PoV be influenced by quantum hypotheses…?

Which PoV would that be?J-Wiki (talk) 06:35, 11 September 2018 (UTC)

As for me: “Hypotheses non fingo” —User:Harnad (talk) 12:47, 10 September 2018 (UTC) —User:Harnad (talk) 13:07, 10 September 2018 (UTC) —User:Harnad (talk) 13:10, 10 September 2018 (UTC)

Thank you for your comments.
Please see my replies interspersed above.
The essence of this discussion should be copied to the article’s talk page, so that others can benefit from the discussion for the purpose of editing the article.
J-Wiki (talk) 06:35, 11 September 2018 (UTC)
As you request, I will copy this exchange to the talk page of Hard problem of consciousness. Not to extend this side-discussion too long, I will just make two replies:
(1) The hard problem is a problem of explanation: “how” and “why” do organisms feel rather than just function (insentiently, like machines)? Dave Chalmers did not claim to invent the hard problem, just to name it and point out that it is hard to explain how and why organisms feel rather than just function. Scientific explanation is not normally a metaphysical matter. “Monism” vs. “Dualism” (i.e., is there one kind of “stuff” in the universe [“physical”] or two kinds of “stuff” [“physical” and “mental”]?) is a metaphysical distinction, which in turn implies that the hard problem is hard for metaphysical reasons. But that is only one of many possible reasons why the hard problem is hard. Metaphysical dualism should not be given a privileged status among the myriad conceivable solutions to the hard problem, and it certainly should not be part of the definition of the hard problem. I think it is a mistake, and misleading to WP readers, to make metaphysical conjectures part of the definition of the hard problem. It is clear that some internal states of organisms are unfelt states and other internal states are felt states. The copula “are” here is not a metaphysical “are.” It takes no sides on how or why some internal states are felt. It just states that they are, and that explaining how or why this is the case is hard. The copula “is” is not a statement of the “identity theory” or any other metaphysical conjecture or scientific hypothesis. It is just the “is” we use in any subject/predicate statement.
(2) What PoV might someone interested in quantum mechanical puzzles unwittingly import into their view of the hard problem? Well, wave/particle “duality,” for a start… –User:Harnad (talk) 14:13, 11 September 2018 (UTC)
@Harnad: I’ll respond here instead of on the talk page where I saw this discussion since my response is not related to editing Hard problem of consciousness. One problem with many philosophical discussions of “mental states” (including the discussion above) is that the timing of such “mental states” is poorly specified. Philosopher Robert Prentner, for example, recently argued that the hard problem needs to be reframed with finer temporal specification; he wrote:
(Notice, incidentally, how Prentner’s mention of Whitehead’s “subject-predicate dogma” relates to your statement above: the “is” we use in any subject/predicate statement.)
If you or J-Wiki know of any interesting recent discussions of the hard problem that address temporality on multiple scales as Prentner alluded to, I would be grateful if you could share relevant citations. Biogeographist (talk) 19:11, 11 September 2018 (UTC)

@Biogeographist: I’m afraid I don’t know of recent, relevant writings on this question. I can only say that I find “dual-aspect” theory as unhelpful as the other 8 nonsolutions listed in paragraph 9 above (or many more that could have been mentioned). I don’t know what Prentner means by “finer temporal resolution” (though I’m pretty sure that by “tantamount” he means “paramount”). My guess is that the “is” ambiguity (“is” as stating a proposition and “is” as making an identity claim) is not really a profound matter. There is always a problem with physical to mental or mental to physical predication because of the (unsolved) “hard problem.” We do not know how (or why) feeling is generated. Dualists insist on reminding us that we don’t even know whether feeling is (physically) generated, or somehow sui generis. Timing won’t help.

(I assume that the hope is that if the physical (functional) state and the mental (felt) state don’t occur simultaneously, this will somehow help sort things out: I think it won’t. I did note once, in a Benjamin-Libet context (and Dan Dennett cites it in one of his books on consciousness), that it is impossible to time the exact instant of a mental event: it could precede, coincide with, or follow its physical correlate (and subjective report certainly cannot settle the matter!): there’s no objective way to pinpoint the subjective event except very approximately, not the fine-tuning Prentner seems to want. Saul Sternberg thought it could be done statistically, with averaging, as with event-related potentials. But I think it wouldn’t help either way. Whether feeling occurs before, during, or after a neural correlate, it does not help with the hard problem, which is a problem of causal explanation, not chronometry.)

@Harnad: Thanks for the response. I’m familiar with Libet and some related discussions but all of that is quite old at this point. I suspect there is a role for chronometry in further elucidation of the hard problem but it will require, among other things, further experimental research (as always) and technological development. If “there’s no objective way to pinpoint the subjective event” then what is the warrant for even speaking of a “felt state” in contrast to an “unfelt state” (as the lead of Hard problem of consciousness does) when a “state” can’t even be pinpointed in phenomenal experience? (This is a question about the appropriateness of “state” as a common denominator between felt and unfelt, or subjective and objective, rather than about the hard problem in general. It’s also a rhetorical question: I don’t expect that there is a good response.) Biogeographist (talk) 15:23, 13 September 2018 (UTC)

@Biogeographist: First, “no objective way to pinpoint the subjective event except very approximately” is not the same as no way to pinpoint it at all.

Second, the limits of human timing accuracy for detecting felt states or events are pretty well known. I can say whether it felt as if my tooth-ache started a half-second earlier or later but not whether it started a millisecond earlier or later. So the temporal boundaries of felt instants are probably too coarse for pinning them to neural correlates (which can be much finer).

Of course one can always dream of new technology, but that would still only be based on more accurate timing of objective neural events (unless you are imagining either a drug or a form of brain stimulation that increases the limits of human timing accuracy for detecting felt states, which I think is unlikely, though not inconceivable).

But even if subjective detection could be made as accurate as objective (neural) detection, how can that more accurate chronometry help with causal explanation? As I said, the felt instant could precede, coincide with or follow its neural correlate, but none of the three options helps explain how or why (or even whether) neural events cause feelings.

The causal problem is not in the timing. It’s in the functionality. Neural events can clearly (and unproblematically) cause motor as well as other physiological activity (including “information-processing”), all of which can be objectively timed and measured. No causal problem whatsoever there. Suppose some neural events turn out to slightly precede or even be exactly simultaneous with felt states (within the limits of measurement): How would that help explain how and why the felt states are felt? Even if the felt states systematically precede their neural correlates (within the limits of measurement), how does that help explain how and why felt states are felt?

That’s why I think “temporality” is not going to help solve the hard problem. I think the real problem is not in the timing of either firings or feelings. The problem is that feeling seems to be causally superfluous for causing, hence explaining, our cognitive capacities — once the “easy problems” (which are “only” about the causal mechanisms of behavior, cognitive capacity and physiology) have been solved.

Imagine a scenario in which the feeling precedes its neural correlate, the neural correlate can also occur without the preceding feeling, and we can show that the neural correlate alone, without the preceding feeling, is incapable of generating some behavioral capacity (i.e., solving an easy problem), whereas when preceded by the feeling, it can. This sounds like the ultimate gift to the dualist: but what does it explain, causally? Nothing. It is just a just-so story, causally speaking. It leaves just as big a causal mystery as the scenario in which the neural correlate precedes or coincides with the feeling. None of this gives the slightest hint of a solution. Neither monism nor dualism solves the hard problem. It just soothes metaphysical angst, hermeneutically.

Now there could be a form of dualism that does give a causal explanation, hence a solution to the hard problem: if, in addition to the four fundamental forces of nature (gravitation, electromagnetism, and the strong and weak nuclear forces), there were a fifth force which corresponded to feeling (or willing), then we would be no more entitled to ask “how and why does this fifth force pull or push” than we are to ask how and why the other four fundamental forces pull or push. They are simply fundamental forces of nature, as described by the fundamental laws of nature, and as supported by the empirical evidence. But that empirical support is exactly what feeling as a fifth force lacks: there is no empirical evidence whatsoever of a fifth fundamental force, whereas there is no end of observable, measurable evidence of the other four.

So even on the last hypothetical scenario (feeling precedes neural correlate and some behavioral capacity cannot be generated by the neural correlate alone when it is not preceded by the feeling), the “causal power” of feeling would remain a mystery: a hard problem, unsolved. The only thing I can say in favor of this fantasy scenario (which I don’t believe) is that if it did turn out to be true, it would mean that the “easy problems” cannot all be solved without the (inexplicable) help of feeling, and hence that some easy problems turn out to be hard! —User:Harnad (talk) 22:48, 13 September 2018 (UTC)

@Harnad: Thanks again for the response. It’s very interesting to read how you think. You went in a direction I wouldn’t have anticipated: I’m not sure where your sudden emphasis on “causal explanation” in your comment above comes from. (Perhaps it came from Prentner’s article, but I think he brings up causal explanation just to point toward the alternative of mereological explanation.) The word “causal” doesn’t appear in the Hard problem of consciousness article, and “not all scientific explanations work by describing causal connections between events or the world’s overall causal structure” (from: Lange, Marc (2017), Because without cause: non-causal explanation in science and mathematics, Oxford studies in philosophy of science, Oxford; New York: Oxford University Press, doi:10.1093/acprof:oso/9780190269487.001.0001, ISBN 9780190269487, OCLC 956379140). I don’t see why responses to the hard problem aimed at explaining the relationship between phenomenal experience and objective data, or at explaining the very nature of the phenomenal, would require a causal explanation. So I don’t think that “how can that more accurate chronometry help with causal explanation?” is the right question, but I’ll take the question with the word “causal” omitted. And I’ll admit that I don’t have a good answer, but I’ll keep thinking about it. I can say that I often don’t know very well exactly what it is that I am feeling, and I’m familiar enough with the psychotherapy research literature to know that I’m not alone; this is common even among relatively normal people (which includes me, of course), as philosopher Eric Schwitzgebel has also emphasized (e.g., Schwitzgebel, Eric (2011), Perplexities of consciousness, Life and mind, Cambridge, MA: MIT Press, ISBN 9780262014908, OCLC 608687514). So that adds another explanatory target for the hard problem: not just why and how people (OK, sentient beings) feel, but also why and how they don’t know what they feel. And I suspect there is a role to play for a better understanding of “temporal relations” (Prentner) and of many other things. I’ll happily concede that it may not help with causal explanation because, as I said, I don’t see how causal explanation is required. Biogeographist (talk) 01:18, 14 September 2018 (UTC)

@Biogeographist: Yes, Dave Chalmers may not have written about explanation or causal explanation in relation to the hard problem, but I have. ;>) And I don’t think the hard problem has much to do with our subjective perplexities about what it is that we’re feeling: A coherent causal explanation of how and why tissue-damage — besides generating the “easy” adaptive responses (limb-withdrawal, escape, avoidance, learning, memory) — also generates “ouch” would be sufficient to solve the hard problem (without any further existential or phenomenological introspection on the meaning or quality of “ouch”). Best wishes, —User:Harnad (talk) 12:19, 14 September 2018 (UTC)
