Harnad, S. (1993) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism," D.M.W. Powers & P.A. Flach, eds.). Pp. 27-30.
For many years now, Harnad has argued that transduction is special among cognitive capacities -- special enough to block Searle's Chinese Room Argument. His arguments (as well as Searle's) have been important and useful but, it seems to me, not correct. Their arguments have provided the modern impetus for getting clear about computationalism and the nature of computing. This task has proven quite difficult, which is simply to say that dealing with Harnad's arguments (as well as Searle's) has been difficult. Turing, it turns out, only got us started; but he was on the right track, and Harnad's (and Searle's) arguments ultimately fail.
To begin, I will state for the record what computationalism is (or ought to be). It is the hypothesis that all cognition, all thinking, is the execution of some partial recursive functions. Our job as psychologists, AI researchers, cognitive scientists, cognitive ethologists, neuroscientists, etc., is to discover which functions from this vast set best describe the various cognitive capacities we observe. (I don't for a minute think that we should approach our task explicitly thinking of functions; we needn't all try to be mathematicians.)
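The qualifier "partial" matters here: a partial recursive function need not be defined on every input. A minimal sketch (mine, purely for illustration) of the mu-operator -- the unbounded search that makes recursive functions merely partial rather than total:

    # Unbounded minimization: return the least n with predicate(n) true.
    # This is what makes recursive functions "partial": if no such n
    # exists, the search simply never halts.
    def mu(predicate):
        n = 0
        while not predicate(n):
            n += 1
        return n

    print(mu(lambda n: n * n > 50))   # halts: prints 8
    # mu(lambda n: False) would run forever: mu is partial, not total.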
Computationalism has to be true because every process is a computation: computation is ubiquitous. At least that's the hypothesis. Here is my argument (derived from Fields, 1989). It is impossible to describe any process (transduction included) to an arbitrary degree of precision: past a certain degree, noise renders any further precision meaningless. This point is basically Heisenberg's Principle, and hence can be put another way: all measurements of any process are ultimately nonclassical. A measurement is nonclassical when it affects the internal states or dynamics of the system being measured. The claim is that all measurement becomes nonclassical, for every physical system, if one attempts to specify, or measure, states to an arbitrary degree of precision. Fields puts it this way: "In the limit of an infinitely precise measurement of [a] state, all information about the next state ... is lost" (p. 176). There is, therefore, an upper bound on the precision of the measurements we can make, and hence on the descriptions we can deploy, for a given system or process. In short, the number of states we can describe is at most countably infinite. Hence, for any process or system, there will always be some virtual machine that completely describes that process's behavior. We can, in theory, implement that virtual machine on a computer (though not always in practice) because of (a) Church's thesis and (b) Turing's proof of the universality of the Turing machine. QED.
(Please note: computationalism is a hypothesis about cognition; it is not a metaphysical thesis about the nature of the world. The claim in the preceding paragraph derived from Fields -- that every process is a computation -- is a metaphysical thesis about the world (actually, Fields thinks it is a physical thesis). Hence, Fields's argument is not an argument for computationalism, strictly speaking. The argument for computationalism is a straightforward instance of universal instantiation, on the assumption that Fields's argument is correct.)
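To make the Fields-style argument concrete, here is a minimal sketch (my toy example, not Fields's own construction: the logistic map stands in for an arbitrary physical process, and two decimal places stand in for the precision bound). Once measurement precision is bounded, the observable behavior of a continuous process collapses into a finite transition table -- a virtual machine any computer can run:

    # Toy illustration: bounded-precision measurement turns continuous
    # dynamics into a finite, computable state-transition table.
    PRECISION = 2  # decimal places we can reliably measure (assumption)

    def measure(x):
        """All we can ever record about state x at bounded precision."""
        return round(x, PRECISION)

    def step(x):
        """Stand-in for any physical dynamics (here, the logistic map)."""
        return 3.7 * x * (1.0 - x)

    transitions = {}
    x = 0.41
    for _ in range(1000):
        s = measure(x)
        x = step(x)
        transitions[s] = measure(x)  # clashes here are exactly the
                                     # "noise" beyond our precision bound

    # 'transitions' is now a finite program reproducing every
    # distinction our instruments could draw in the original process.
    print(len(transitions), "describable states")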
Now, many have regarded the ubiquity of computation as a testament to its vacuity, Harnad and Searle among them. They say that if everything is a computation, then the thesis that everything is a computation is vacuous, and computationalism is vacuous with it. But neither thesis is vacuous. The first is a general, universal hypothesis made in the time-honored tradition of science everywhere. The claims that everything is made of atoms, that all objects tend to continue their motion unless otherwise disturbed, or that all species evolved are well-known scientific hypotheses. They are not vacuous at all, but are instead profound and deep claims about the world. Furthermore, the inference from, e.g., "everything is made of atoms" to "this keyboard is made of atoms" is a case of universal instantiation and constitutes a test of the hypothesis: if it should turn out that my keyboard is not made of atoms, then the hypothesis is false, and our physics is in deep trouble.
So it is with computationalism and Fields's thesis. The claim that everything is a computation is not vacuous. And the inference from it to the claim that thinking is computation is likewise not vacuous. Finally, the inference to the claim that my thinking is computing is a test: if it should turn out that my thinking is not computing, then both computationalism and Fields's thesis are false. Testing for individual counterexamples is probably the only way we have of proceeding, by the way.
One can see that the claim that everything is a computation is not vacuous by noticing what it would mean if it were true and what it would take to make it false. With respect to computationalism, this last point is very important. Many researchers misunderstand (or refuse to accept) the empirical nature of computationalism. Computationalism would be false if thinking (e.g., my thinking) involved executing functions equivalent to the halting problem. And we have every reason to believe we could demonstrate this if it were true. More importantly, it seems clear that computationalism's status as a viable hypothesis would be seriously jeopardized if it turned out that humans routinely computed exact solutions to arbitrarily large instances of the traveling salesman problem. Strictly speaking, such a scenario is compatible with computationalism, but assuming P does not equal NP, it would mean that we seriously misapprehend computational architectures. In such a case, I for one would be prepared to re-evaluate my commitment to computationalism.
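To make the stakes concrete, a sketch (my illustration only): exact solution of the traveling salesman problem by brute force examines (n-1)! tours, trivial at five cities and astronomically out of reach at thirty -- which is why routine human solutions to arbitrarily large instances would be so startling:

    from itertools import permutations
    from math import dist, factorial

    def shortest_tour(cities):
        """Exact TSP: exhaustively try every tour starting from city 0."""
        start, rest = cities[0], cities[1:]
        best = None
        for order in permutations(rest):
            tour = (start,) + order + (start,)
            length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
            if best is None or length < best:
                best = length
        return best

    cities = [(0, 0), (1, 3), (4, 1), (2, 2), (5, 4)]
    print(shortest_tour(cities))  # n = 5: only 4! = 24 tours to check
    print(factorial(29))          # n = 30: about 8.8 * 10^30 tours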
In sum, computationalism is a non-trivial, falsifiable thesis about cognition, whose truth follows from a profound fact about measurement and the physical world, and for which there is ample evidence.
Where does this leave Harnad's arguments and his hybrid project? I'll take these in turn. All of Harnad's arguments against computationalism and for what he calls the Total Turing Test (TTT) seem to me to be question-begging or just bald assertions. For example, defining computation as the "manipulation of physical `symbol tokens' on the basis of syntactic rules that operate only on the shapes of the symbols..." (1.2) is clearly question-begging. I, for one, have no idea what "syntactic rules operating only on shapes" even means with respect to real computation, and such a phrase packs into the notion of computation the very aspect that allows Harnad (and Searle) to reject it. And anyway, it is by now almost a commonplace that computation is fully semantical (e.g., see Dietrich 1989, 1990; Dyer 1990; and Smith 1988, in press).
Another question-begging feature of Harnad's argument is his reliance on the distinction between simulation and the real thing: if computationalism is true, this distinction is vacuous for cognition. And nowhere does he give us a compelling argument that computers merely simulate thinking. He just asserts that they do.
Furthermore, Harnad gives us no reason to believe either that "certain forms of performance may be unattainable from `ungrounded' symbol systems" (1.5), or that parallelism is essential to realizing certain nontrivial cognitive abilities (see PAR, 2.1.3). And for this last claim, there is considerable evidence that it is false. In their paper "Multi-layer feedforward networks are universal approximators" (1989), Hornik, Stinchcombe, and White give a slow, rigorous proof that standard feedforward networks with as few as one hidden layer are capable of approximating any (Borel) measurable function, i.e., that they are as strong as Turing machines. And Selmer Bringsjord (1991) has offered the following quick but convincing argument that they are equivalent: neural nets are computationally equivalent to cellular automata, which in turn are computationally equivalent to k-tape Turing machines, from which it follows that the Church-Turing thesis remains true even for neural nets; hence, no claims about different strengths for different types of machines need be entertained.
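The flavor of the Hornik-Stinchcombe-White result can be seen in a few lines (a sketch of mine, illustrative only: the theorem is an existence claim about approximation, not about any particular training method). Here, random sigmoid hidden units plus a least-squares output layer approximate sin(x), and the error tends to fall as the single hidden layer grows:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200)[:, None]
    y = np.sin(x).ravel()

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for hidden in (2, 8, 32):
        W = rng.normal(scale=3.0, size=(1, hidden))   # input-to-hidden weights
        b = rng.normal(scale=3.0, size=hidden)        # hidden biases
        H = sigmoid(x @ W + b)                        # one hidden layer
        coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output layer
        err = np.max(np.abs(H @ coef - y))
        print(f"{hidden:3d} hidden units: max error {err:.4f}")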
Finally, what about Harnad's hybrid project? I think it sounds interesting and even worthy of funding. His hybrid system isn't required for the reasons he thinks it is, i.e., his system isn't required because a more standard system would be incapable of thinking. But his hybrid might be methodologically necessary because it is possible that only with such a system might we humans figure out how to implement intelligent thought. In any case, I think there is room for everyone. Figuring out how the brain produces a mind is a problem that dwarfs even such chestnuts as the search for a unified field theory or predicting the weather. With such an immense task on our hands, it won't do to lock any researchers out, especially those with some positive project they're pursuing, like Harnad.
Bringsjord, S. (1991). Is the Connectionist-Logicist Clash One of AI's Wonderful Red Herrings? Journal of Experimental and Theoretical AI 3(4): 319-349.
Dietrich, E. (1990). Computationalism. Social Epistemology 4: 135-154.
Dietrich, E. (1989). Semantics and the Computational Paradigm in Cognitive Psychology. Synthese 79: 119-141.
Dyer, M. G. (1990). Intentionality and Computationalism: Minds, Machines, Searle, and Harnad. Journal of Experimental and Theoretical AI 2(4): 303-319.
Fields, C. (1989). Consequences of Nonclassical Measurement for the Algorithmic Description of Continuous Dynamical Systems. Journal of Experimental and Theoretical AI 1(3): 171-178.
Hornik, K., Stinchcombe, M., and White, H. (1989). Multi-layer Feedforward Networks are Universal Approximators. Neural Networks 2: 359-366.
Smith, B. C. (1988). The Semantics of Clocks. In J. Fetzer (ed.), Aspects of Artificial Intelligence. Dordrecht, Holland: Kluwer, pp. 3-31.
Smith, B. C. (in press). Is Computation Formal? Cambridge: MIT/Bradford.
If Dietrich endorses my empirical approach, is there anything left to quarrel about? Or has he too changed the subject? I define computation as syntactic symbol manipulation; Dietrich sees it as the execution of recursive functions. As far as I know, these two are equivalent. Dietrich says every process is a computation, but that this is not vacuous. If he means every process is describable by computation (as it is describable by English sentences -- another form of computation), I think this is true (I do subscribe to the Church/Turing Thesis), but irrelevant. For the purpose of computationalism (or so I thought) was to distinguish cognitive systems from, say, planetary systems, fluids, and electrons (to pick some basic physical systems), or from furnaces, cars, and planes (to pick some engineering systems). If all of these are computational systems then so is the brain, of course, and there seems to be nothing worth disagreeing about -- but we're no closer to knowing what's special about cognitive systems.
But is every process computation? Is there no longer a difference between a furnace and a virtual furnace? I think I can still espy one: The furnace really gets hot, whereas the virtual furnace is just a bunch of symbols that are systematically interpretable as if something were getting hot. "Getting hot," in other words, is not a computational process but a physical process; in particular, unlike computation, it is not implementation-independent. So there's hope for computationalism. For if it is not true that every process is just computation, but only that every process is simulable by computation, then we can still ask whether cognition is something computation is, rather than just something computation can simulate or describe; in other words, whether cognition is just implementation-independent syntactic symbol manipulation or something more.
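The distinction fits in a few lines of the computationalist's own medium (a toy of mine, purely illustrative):

    # A "virtual furnace": states systematically interpretable as
    # heating, though the only physical events are symbol manipulations.
    temperature = 20.0                # interpretable as degrees Celsius
    for minute in range(1, 11):
        temperature += 37.5           # interpretable as "heating up"
        print(f"minute {minute}: {temperature} degrees")
    # Run this on any hardware whatever: the readout is identical and
    # the room stays cold. Heat, unlike computation, is not
    # implementation-independent.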
And it turns out that you don't have to turn to the halting problem to falsify this nonvacuous hypothesis: the Chinese Room Argument suffices; or, if that's not enough to persuade you, consider the kinds of obstacles the symbol grounding problem raises for the hypothesis that thinking is just symbol manipulation (Harnad 1990b). I can't see Dietrich's difficulty with understanding what is meant by "syntactic rules operating only on [symbol] shapes" [which, like the shape of the physical states in the simulated furnace, are arbitrary in relation to what the symbols can be systematically interpreted as meaning -- see Boyle on pattern matching], but I would be especially interested to hear more about the "commonplace" that computation is "fully semantical": Does this mean that a simulated furnace is not just squiggles and squoggles that are systematically interpretable by us as if they were getting hot? Do they somehow contain that interpretation on their own, in the way that the sentences in a book, say, do not?
I think Dietrich may be selling semantics short here: For me, at any rate, there's only one kind of meaning, and that's the kind that occurs in the heads of conscious systems. When I say some system or part of a system means "the cat is on the mat," I really mean the system consciously means it, the way I do when I think it. Cognition, after all, is thinking, isn't it? And thinking is conscious thinking (or at least going on in the head of a conscious thinker). So, in order not to beg the question about whether or not computation is indeed "fully semantical," it will have to be shown that the computational system consciously means what it is (otherwise only) systematically interpretable by us as meaning.
Let me hasten to add that this is a much taller order than I undertake in trying to ground symbols in TTT capacity: Just as immunity to Searle's argument cannot guarantee mentality, so groundedness cannot do so either. It only immunizes against the objection that the connection between the symbol and what it is about is only in the mind of the interpreter. A TTT-indistinguishable system could still fail to have a mind; there may still be no meaning in there. Unfortunately, however, that is an ontic state of affairs that is forever epistemically inaccessible to us: We cannot be any the wiser.
(Dietrich misunderstands my points about possible performance limitations of ungrounded symbol systems in general [apart from their cognitive ambitions]; that was just a conjecture, on which nothing else I say depends; and far from wanting to give reasons why parallelism is essential for anything cognitive, I chose transduction, not parallelism, as my motivated candidate for this essential role.)
-- S.H.