{"id":568,"date":"2018-12-31T20:15:58","date_gmt":"2018-12-31T20:15:58","guid":{"rendered":"http:\/\/generic.wordpress.soton.ac.uk\/skywritings\/?p=568"},"modified":"2018-12-31T20:15:58","modified_gmt":"2018-12-31T20:15:58","slug":"unfelt-grounding-reply-to-john-campbell-2","status":"publish","type":"post","link":"http:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2018\/12\/31\/unfelt-grounding-reply-to-john-campbell-2\/","title":{"rendered":"Unfelt Grounding: Reply to John Campbell-2"},"content":{"rendered":"

(Reply to John Campbell-2)

JC: “You can’t address the symbol-grounding problem without looking at relations to sensory awareness. Someone who uses, e.g., words for shapes and colors, but has never had experience of shapes or colors, doesn’t know what they’re talking about; it’s just empty talk (even if they have perceptual systems remote from consciousness that allow them to use the words differentially in response to the presence of particular shapes or colors around them). Symbol-grounding shouldn’t be discussed independently of phenomena of consciousness.”

The symbol grounding problem first reared its head in the context of John Searle’s Chinese Room Argument. Searle showed that computation (formal symbol manipulation) alone is not enough to generate meaning, even at Turing-Test scale. He was saying things coherently in Chinese, but he did not understand, hence mean, anything he was saying. And the incontrovertible way he discerned that he was not understanding was not by noting that his words were not grounded in their referents, but by noting that he had no idea what he was saying, or even that he was saying anything. And he was able to make that judgment because he knew what it felt like to understand (or not understand) what he was saying.

The natural solution was to scale up the Turing Test from verbal performance capacity alone to full robotic performance capacity. That would ground symbol use in the capacity for interacting with the things the symbols are about, Turing-indistinguishably from a real human being, for a lifetime. But it’s not clear whether that would give the words meaning, rather than just grounding.

Now you may doubt that there could be a successful Turing robot at all (but then I think you would have to explain why you think not). Or, like me, you may doubt that there could be a successful Turing robot unless it really did feel (but then I think you would have to explain, as I cannot, why you think it would need to feel).

If I may transcribe the above paragraph with some simplifications, I think I can bring out the fact that an explanation is still called for. But it must be noted that I am, and have been all along, using “feeling” synonymously with, and in place of, “consciousness”:


*JC: “You can’t address the symbol-grounding problem without looking at relations to feeling. A Turing robot that uses words for shapes and colors, but has never felt what it feels like to see shapes or colors, doesn’t know what it’s talking about; it’s just empty talk (even if it has unfelt sensorimotor and internal systems that allow it to speak and act indistinguishably from us). Symbol-grounding shouldn’t be discussed independently of feeling.”

I think you are simply assuming that feeling (consciousness) is a prerequisite for being able to do what we can do, whereas explaining how and why that is true is precisely the burden of the hard problem.

You go on to write the following (but I will consistently use “feeling” for “consciousness” to make it clearer):

JC: “Trying to leave out problems of [feeling] in connection with symbol-grounding, and then [to] bring [it] back in with the talk of ‘feeling’, makes for bafflement. If you stick a pin in me and I say ‘That hurt’ is the pain itself the feeling of meaning? The talk about ‘feeling of meaning’ here isn’t particularly colloquial, but it hasn’t been given a plain theoretical role either.”

I leave feeling out of symbol grounding because I don’t think they are necessarily the same thing. (I doubt that there could be a grounded Turing robot that does not feel, but I cannot explain how or why.)

It feels like one thing to be hurt, and it feels like another thing to say and mean “That hurt.” The latter may draw on the former to some extent, but (1) being hurt and (2) saying and meaning “That hurt” are different, and feel different. The only point is that (2) feels like something too: that’s what makes it meant rather than just grounded.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.
