{"id":292,"date":"2018-12-25T16:02:20","date_gmt":"2018-12-25T16:02:20","guid":{"rendered":"http:\/\/generic.wordpress.soton.ac.uk\/skywritings\/?p=292"},"modified":"2018-12-25T16:02:20","modified_gmt":"2018-12-25T16:02:20","slug":"292","status":"publish","type":"post","link":"http:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2018\/12\/25\/292\/","title":{"rendered":""},"content":{"rendered":"
<\/p>\n
> Arnold Trehub wrote: *“brain analogs… are much more informative than mere correlates”*
I am going to think out loud about the possibility of “duals” here, because I am not really sure yet what implication I want to draw from them for the question of psychophysical “analogs” vs “correlates.”
The question is interesting (and Saul Kripke gave it some thought in the ’70s when he expressed some skepticism about the coherence, hence the very possibility, of the notion of “spectrum inversion”): Could you and I really use exactly the same language, indistinguishably, and live and interact indistinguishably in the world, while (unbeknownst to us) green looks (i.e., feels) to me the way red does to you, and vice versa?
Kripke thought the answer was no, because with that simple swap would come an infinity of other associated similarity relations, all of which would likewise have to be systematically adjusted to preserve the coherence of what we say as well as do in the world. (“Green” looks more like blue, “red” looks more like purple, etc.)
At the time, I agreed, because I had come to much the same conclusion about semantic swapping: Would a book still be systematically interpretable if every token of “less” were interpreted to mean “more” and vice versa? (I don’t mean just making a swap between the two arbitrary terms we use, but between their intended meanings, while preserving the usage of the terms exactly as they are used now.)
I was pretty sure that the swap would run into detectable trouble quickly for the simple reason that “less” and “more” are not formal “duals” the way some terms and operations are in mathematics and logic. My intuition — though I could not prove it — was that almost all seemingly local pairwise swaps like less/more would eventually require systematic swaps of countless other opposing or contradictory or dependent terms (“I prefer/disprefer having less/more money…”), eventually even true/false, and that standard English could not bear the weight of such a pervasive semantic swap and still yield a coherent systematic interpretation of all of our verbal discourse. And that’s even before we ask whether the semantic swap could also preserve the coherence between our verbal discourse and our actions in the world.
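For contrast, here is the kind of swap that *does* survive: a minimal Python check (my illustration, not part of the argument above) that AND and OR really are formal duals in Boolean logic, so a systematic swap of the two, together with complementation, preserves every truth:

```python
# What a formal "dual" looks like: in Boolean logic, AND and OR are duals.
# Swapping them while complementing inputs and outputs preserves every
# truth-value assignment (De Morgan's laws).
from itertools import product

for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))   # De Morgan I
    assert (not (p or q)) == ((not p) and (not q))   # De Morgan II
print("AND/OR swap is truth-preserving under complementation")
```

Nothing of the sort holds for “less”/“more” in ordinary English, which is why the swap propagates without end.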
But since then I’ve come to a more radical view about meaning itself, according to which the only difference between a text (a string of symbols P instantiated in a static book or a dynamic computer) that is systematically interpretable as meaning something, but has no “intrinsic intentionality” (in Searle’s sense), and a text (say, a string of symbols P instantiated in the brain of a conscious person thinking the thought that P) is that it feels like something to be the person thinking the thought that P, whereas it feels like nothing to be the book or the computer instantiating the symbol string. Systematic interpretability (“meaningfulness”) in both cases, but (intrinsic) meaning only in the (felt) one.
I further distinguish meaning, in this felt sense, from mere grounding, which is yet another property that a mere book or computer lacks: Only a robot that could pass the robotic Turing Test (TT; the capacity to speak and act in the real world, indistinguishably *from* a person *to* a person, for a lifetime) would have grounded symbols. But if the robot did not feel, it still would not have symbols with intrinsic “intentionality”; it would still be more like a book or computer, whose sentences are systematically interpretable but mean nothing except in the mind of a conscious (i.e., feeling) user. (It is of course an open and completely undecidable question whether a TT-passing robot would or would not actually feel, because of the other-minds problem. I think it would — but I have no idea how or why!)
But this radical equation of intrinsic meaning (as opposed to mere systematic interpretability) with feeling would make Kripke’s observations about color-swapping (i.e., feeling-swapping) and my observations about meaning-swapping into one and the same thing.
It is not only that verbal descriptions fall short of feelings in the way that verbal descriptions fall short of pictures, but that feelings (say, feelings of greater or lesser intensity) and whatever the feelings are “about” (in the sense that they are caused by them and they somehow appertain to them) are incommensurable: The relation between an increase in a physical property and its felt quality (e.g., an increase in physical intensity and a felt increase in intensity) is a systematic (and potentially very elaborate and complicated) correlation (more with more and less with less), but does it even make sense to say it is a “resemblance”?
For this reason, brain “analogs” too are just systematic correlates insofar as felt quality is concerned. I may have (1) a neuron in my brain whose intensity (or frequency) of firing is in direct proportion to (2) the intensity of an external stimulus (say, the amplitude of a sinusoid at 440 Hz). In addition, there is the usual log-linear psychophysical relationship between the stimulus intensity (2) and (3) my ratings of (felt) intensity. The stimulus intensity (2) and the neuronal intensity (1) are clearly in an analog relationship. So are the stimulus intensity (2) and my intensity ratings (3) (as rated on a 1-10 scale, say). And so are the neuronal intensity (1) and my intensity ratings (3). But you could get all three of those measurements, hence all three of those correlations, out of an unfeeling robot. (I could build one already today.) How does (4) the actual feeling of the intensity figure in all this?
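To make that concrete, here is a minimal sketch in Python of such an unfeeling robot (all function names and constants are my own illustrative assumptions, and the log-linear rule is the standard Fechner form): measurements (1)-(3), and all three pairwise correlations, fall out of a few lines of arithmetic, with the feeling (4) figuring nowhere.

```python
# A minimal "unfeeling robot" exhibiting (1)-(3) and their correlations.
import math
import random

def neuron_firing(stimulus):
    # (1) firing rate, directly proportional to stimulus intensity (2)
    return 50.0 * stimulus              # illustrative spikes/sec per unit amplitude

def intensity_rating(stimulus):
    # (3) a 1-10 rating, log-linear in (2): Fechner-style k*log(I) + c
    rating = 1 + 3.0 * math.log10(stimulus)
    return max(1, min(10, round(rating)))

# (2) stimulus intensities: amplitudes of the 440 Hz sinusoid, spread over 3 decades
stimuli = [10 ** random.uniform(0, 3) for _ in range(100)]
firings = [neuron_firing(s) for s in stimuli]
ratings = [intensity_rating(s) for s in stimuli]

def correlation(xs, ys):
    # Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# All three correlations come out of a system that feels nothing:
print(correlation(stimuli, firings))    # (2) vs (1): exactly 1.0 (linear)
print(correlation(stimuli, ratings))    # (2) vs (3): strongly positive (log-linear, monotone)
print(correlation(firings, ratings))    # (1) vs (3): strongly positive
```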
You want to say that my intensity ratings are based upon an “analog” of that felt intensity. Higher rated intensity is systematically correlated with higher felt intensity, and lower rated intensity is correlated with lower felt intensity. But in what way does a higher intensity rating RESEMBLE a higher intensity feeling? Is the rating not just a notational convention I use, like saying that “higher” sound-frequencies are “higher”? (They’re not really higher, like higher in the sky, are they?) (The same thing is true if I instead use the “analog” convention of matching the felt frequency with how high I raise my hand. And if it’s instead an involuntary reflex rather than a voluntary convention that is causing the analog response — say, pupillary constriction in response to increased light intensity — then the correlated feeling is even more side-lined!)
The members of our species (almost certainly) all share roughly the same feelings. So we can agree upon, share and understand naming conventions that correlate systematically with those shared feelings. I use “hot” for feeling hot and “cold” for feeling cold, because we have both felt those feelings and we share the convention on what we jointly agree to call what.
That external corrective constraint gets us out of another kind of incorrigibility: Wittgenstein pointed out in his “private-language argument” that there could not be a purely private language because then there could be no error-correction, hence there would be no way for me to know whether (i) I was indeed using the same word systematically to refer to the same feeling on every occasion or (ii) it merely felt as if I was doing so, whereas I was actually using the words arbitrarily, and my memories were simply deceiving me.
So feelings are clearly deceiving if we are trying to “name” them systematically all on our own. But the only thing that social conventions can correct is the sensorimotor *grounding* of those names: What we call (and do with) what, when. I can’t know for sure what you are feeling, but if you described yourself as feeling “hot” when the temperature had gone down, and as feeling “happy” when you had just received some bad news, I would suspect something was amiss.
Those are clearly just correlations, however. Words are not analogs of feelings, they are just arbitrary labels for them. And although a verbal description of a picture can describe the picture as minutely as we like, it is still not an analog of the picture, just a symbolic description that can be given a systematic and coherent interpretation, both in words and actions (if it is TT-grounded).
Yet we all know it can’t be symbolic descriptions all the way down: Some of our words have to have been learned from (grounded in) direct sensorimotor (i.e., robotic) experience. How/why did that experience have to be *felt* experience? That’s the question we can’t answer; the explanatory gap. And a lemma to that unanswered question is: How/why did that felt experience have to *resemble* what it was about — as opposed to merely *feeling like it resembles* what it is about? Why isn’t grounding just “functing” (e.g., the cerebral substrate that enables us to do and say whatever needs to be done and said in order to survive, succeed and reproduce, TT-scale)? And why is there anything more to meaning than just that?