{"id":606,"date":"2018-12-31T21:48:08","date_gmt":"2018-12-31T21:48:08","guid":{"rendered":"http:\/\/generic.wordpress.soton.ac.uk\/skywritings\/?p=606"},"modified":"2018-12-31T21:48:08","modified_gmt":"2018-12-31T21:48:08","slug":"symbols-and-sense","status":"publish","type":"post","link":"http:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2018\/12\/31\/symbols-and-sense\/","title":{"rendered":"Symbols and Sense"},"content":{"rendered":"

Letter to TLS, Nov 4 2011:

“Stevan Harnad misstates the criteria for the Turing Test when he describes a sensing robot that could pass the test by recognizing and interacting with people and objects in the same way that a human can (October 21). Alan Turing’s formulation of the Turing Test specifies a computer with no sensors or robotic apparatus. Such a computer passes the test by successfully imitating a human in text-only conversation over a terminal.

“Significantly, and contrary to Harnad’s formulation, no referential ‘grounding’ of symbols is required to pass the Turing Test.”

    David Auerbach, 472 9th Street, New York 11215.

David Auerbach (TLS Letters, November 4) is quite right that in his original 1950 formulation, what Turing called the “Imitation Game” (since dubbed the “Turing Test”) tested only verbal capacity, not robotic (sensory/motor) capacity: only symbols in and symbols out, as in today’s email exchanges. Turing’s idea was that if people were completely unable to tell a computer apart from a real, live pen-pal through verbal exchanges alone, the computer would really be thinking. Auerbach is also right that, in principle, if the verbal test could indeed be passed through internal computation (symbol manipulation) alone, there might be no need to test, through robotic interactions, whether the computer’s symbols were “grounded” in the things in the world to which they refer.

But 2012 is Alan Turing Year, the centenary of his birth, and in the 62 years since his paper was published, his original agenda for what is now called “cognitive science” has been evolving. Contrary to Turing’s predictions, we are still nowhere near passing his test, and there are by now many reasons to believe that although passing the verbal version might indeed be evidence enough that thinking is going on, robotic grounding will be needed in order actually to pass the verbal test, even if the underlying robotic capacity is not tested directly. To believe otherwise is to imagine that it would be possible to talk coherently about the things in the world without ever being able to see, hear, touch, taste or smell any of them (or anything at all).
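Put in engineering terms, the disagreement is about the candidate’s input/output channels. Below is a minimal sketch in Python (hypothetical interface names of my own, not drawn from Turing or the letters) contrasting the purely verbal candidate, symbols in and symbols out, with the grounded robotic one, whose symbols are also connected, through sensing and acting, to the things they refer to:

```python
# A minimal sketch (hypothetical interfaces, not from Turing or the
# letters above) contrasting the two kinds of candidate under discussion.

from abc import ABC, abstractmethod


class VerbalCandidate(ABC):
    """Turing's 1950 pen-pal test: the candidate exchanges only text.

    Symbols in, symbols out, as in an email exchange; no sensors,
    no robotic apparatus.
    """

    @abstractmethod
    def reply(self, message: str) -> str:
        """Return a textual reply to a textual message."""


class GroundedCandidate(VerbalCandidate):
    """The robotic version: the same verbal capacity, but grounded.

    The candidate's internal symbols are connected, through sensing
    and acting, to the things in the world they refer to.
    """

    @abstractmethod
    def perceive(self, camera_frame: bytes, audio_chunk: bytes) -> None:
        """Take in raw sensory data about objects and people."""

    @abstractmethod
    def act(self, motor_command: str) -> None:
        """Manipulate the objects that the candidate's words are about."""
```

On the view argued above, the second interface is not an additional test so much as the capacity without which no candidate could actually pass the first.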

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346. http://cogprints.org/0615/

Harnad, S. (1992) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4): 9-10.

Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301.

Harnad, S. (2000) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information 9(4): 425-445. (Special issue on “Alan Turing and Artificial Intelligence.”)

Harnad, S. (2001) Minds, Machines and Searle II: What’s Wrong and Right About Searle’s Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle’s Chinese Room Argument. Oxford University Press.

Harnad, S. (2002) Darwin, Skinner, Turing and the Mind. (Inaugural Address, Hungarian Academy of Science.) Magyar Pszichologiai Szemle LVII(4): 521-528.

Harnad, S. (2002) Turing Indistinguishability and the Blind Watchmaker. In: J. Fetzer (ed.) Evolving Consciousness. Amsterdam: John Benjamins. Pp. 3-18.

Harnad, S. and Scherzer, P. (2008) First, Scale Up to the Robotic Turing Test, Then Worry About Feeling. Artificial Intelligence in Medicine 44(2): 83-89.

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer.

Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.
