%A Stevan Harnad
%J Think
%T Grounding Symbols in the Analog World with Neural Nets
%X Harnad's main argument can be roughly summarised as follows: by Searle's
Chinese Room argument, symbol systems by themselves are insufficient to
exhibit cognition, because their symbols are not grounded in the real world and
are hence without meaning. However, a symbol system connected to the real world
through transducers receiving sensory data, with neural nets translating these
data into sensory categories, would not be subject to the Chinese Room
argument.
Harnad's article is not only the starting point for the present debate, but also a
contribution to a long-standing discussion of such questions as: Can a computer
think? If so, would this be solely by virtue of its program? Is the Turing Test
appropriate for deciding whether a computer thinks?
%N 1
%K neural nets, symbol grounding, connectionism, symbolism, computationalism, Searle's Chinese Room, Turing Test, robotics
%P 12-78
%E D.M.W. Powers
%E P.A. Flach
%V 2
%D 1993
%L cogprints1586