Letter to the TLS, November 4, 2011:
“Stevan Harnad misstates the criteria for the Turing Test when he describes a sensing robot that could pass the test by recognizing and interacting with people and objects in the same way that a human can (October 21). Alan Turing’s formulation of the Turing Test specifies a computer with no sensors or robotic apparatus. Such a computer passes the test by successfully imitating a human in text-only conversation over a terminal.
“Significantly, and contrary to Harnad’s formulation, no referential “grounding” of symbols is required to pass the Turing Test.”

David Auerbach, 472 9th Street, New York 11215
David Auerbach (TLS Letters, November 4) is quite right that in his original 1950 formulation, what Turing called the “Imitation Game” (since dubbed the “Turing Test”) tested only verbal capacity, not robotic (sensory/motor) capacity: only symbols in and symbols out, as in today’s email exchanges. Turing’s idea was that if people were completely unable to tell a computer apart from a real, live pen-pal through verbal exchanges alone, the computer would really be thinking. Auerbach is also right that, in principle, if the verbal test could indeed be passed through internal computation (symbol-manipulation) alone, then there would be no need to test with robotic interactions whether the computer’s symbols were “grounded” in the things in the world to which they refer.

But 2012 is Alan Turing Year, the centenary of his birth, and in the 62 years since his paper was published, the agenda he set for what is now called “cognitive science” has been evolving. Contrary to Turing’s predictions, we are still nowhere near passing his test. And there are by now many reasons to believe that, although passing the verbal version might indeed be evidence enough that thinking is going on, robotic grounding will be needed in order actually to pass the verbal test, even if the underlying robotic capacity is never tested directly. To believe otherwise is to imagine that one could talk coherently about the things in the world without ever being able to see, hear, touch, taste or smell any of them (or anything at all).
Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.
Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346. http://cogprints.org/0615/
Harnad, S. (1992) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4): 9-10.
Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301.
Harnad, S. (2000) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information 9(4): 425-445. (special issue on “Alan Turing and Artificial Intelligence”)
Harnad, S. (2001) Minds, Machines and Searle II: What’s Wrong and Right About Searle’s Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle’s Chinese Room Argument. Oxford University Press.
Harnad, S. (2002) Darwin, Skinner, Turing and the Mind. (Inaugural Address, Hungarian Academy of Science.) Magyar Pszichologiai Szemle LVII(4): 521-528.
Harnad, S. (2002) Turing Indistinguishability and the Blind Watchmaker. In: J. Fetzer (ed.) Evolving Consciousness. Amsterdam: John Benjamins. Pp. 3-18.
Harnad, S. and Scherzer, P. (2008) First, Scale Up to the Robotic Turing Test, Then Worry About Feeling. Artificial Intelligence in Medicine 44(2): 83-89.
Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: R. Epstein, G. Roberts & G. Beber (eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer.
Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.