Harnad, S. (1993) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism", D.M.W. Powers & P.A. Flach, eds.), pp. 37-40.
Harnad accepts the picture of computation as formalism, so that any implementation of a program - that's any implementation - is as good as any other; in fact, in considering claims about the properties of computations, the nature of the implementing system - the interpreter - is invisible. Let me refer to this idea as 'Computationalism'. Almost all of the criticism, the claimed refutation by Searle's argument, and the sharp contrast drawn between this idea and others rest on the absoluteness of this separation between a computational system and its implementation.
But Computationalism taken this strictly is a caricature. For example, nobody thinks that whether or not a program might 'pass' the Turing test is completely independent of the hardware it might be running on, since speed might well be crucial to the system's success in conversation. (And in any case, the computer does have to be somehow attached to the keyboard or other interaction devices, and this is already a system matter, even if a rather routine and trivial one.) Actual computationalism is the idea of using computation as a metaphor for the mind. Part of the intellectual excitement of computationalism comes from the observation that the higher-level functional organisation of working software is often largely independent of the detailed causal properties of the hardware it is running on. This seems worthy of note because it is a new kind of relationship between large-scale functional organisation and low-level mechanical detail, different from any we have seen before and one with many surprising consequences. It suggests what a mind might be so that it could arise in a brain, and what a brain might need to be in order that a mind might be in it.
Real computationalism is a research direction, not a philosophical claim. Some of the early attempts to give philosophical accounts of it might perhaps be justly criticised, in retrospect, for having over-emphasised this independence-of-hardware theme. But we need to be careful here. The 'independence of hardware' thesis, even in the caricature form being criticised here, is the claim that the functional structure of the software can be implemented on any hardware you like. But there does have to actually be some hardware on which the software is implemented, and it does have to actually get implemented on it. The independence thesis is not the claim that one doesn't need the hardware at all. This requirement is quite nontrivial. It's not easy to actually make a computer which will run, as Turing knew well. (Talk of Turing machines and universal computability results here is misleading, since this entire body of computability theory is concerned with mathematical functions rather than physical mechanisms. That a Turing machine is a 'universal computer' does not mean that you could buy one and run any piece of software on it, even very slowly.)
Harnad is trying to make a bridge between the software and the hardware which is secure against what he perceives to be Searle's clever trick for getting into boxes where other minds can't possibly be, if Computationalism is right. That any implementation must be real is important here, because any specification of how software is implemented on a real machine will provide just the kind of bridge that Harnad wants. (I am indebted to Brian Smith for making this clear to me.) There is no particular reason why the computer need not have sensors, arms, or whatever other robotic attachments are considered sufficient to nail down the meanings of its internal symbols. But these need not be made of neural stuff, nor need the system be built without software. Harnad argues for the utility of connectionist models on the grounds that, unlike computational models, they must be properly wired to their transducers: there isn't an intervening level of symbolic interpretation that would allow the symbols to float away in a cloud of formal meaninglessness. But this is a non sequitur. That a full account of meaning might require an account of grounding, and that this must somehow relate the structure of software to that of the machine's architecture, does not say anything about the nature of that architecture, or vice versa.
To see this more clearly, consider how a 'transducer' typically works, say a digital camera. One way to do this is to first convert the light to electrical charge, then let the charge leak away at a predetermined rate, using a clock to count how many ticks it takes to do so. That this results in an integer representing the light intensity is a matter of physics. That other parts of the computer represent integers in the same way that the transducer does is a matter of machine construction. That a binary numeral denotes an integer is a matter of implementation encoding. That the program has symbols which correctly refer to light intensities is therefore a matter of how the software is implemented on this particular machine. Now, this is "arbitrary" in the sense that we could have done it some other way and still had everything work pretty much as before (the roboticists might buy a new camera which, unbeknownst to them, works on entirely different physical principles), but that is not an argument that this particular machine is somehow disconnected from its world because it uses computation, in a way that would be less true if it were made of neurons. Its beliefs about light intensity, even those encoded in software, are quite firmly grounded. And they are so grounded simply by the machine being an implementation of the program. There is nothing more mysterious (or less arbitrary) about this than the requirement that the hardware perform arithmetic correctly on binary encodings of digits: but that's just part of what it means to be an implementation.
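To make the causal chain concrete, here is a minimal sketch of the charge-decay scheme just described; the leakage rate, units, and function names are my own illustrative assumptions, not anything specified in the exchange:

```python
# Sketch of the transducer described above: light deposits a charge, the
# charge leaks away at a fixed rate, and a clock counts the ticks until it
# is gone. The tick count is the integer the rest of the machine treats as
# "the light intensity". All constants here are arbitrary choices.

LEAK_PER_TICK = 0.01  # assumed fixed leakage per clock tick

def charge_from_light(intensity: float) -> float:
    """Physics step: incident light deposits a proportional charge (assumed linear)."""
    return intensity

def ticks_to_discharge(charge: float) -> int:
    """Machine-construction step: count clock ticks until the charge has leaked away."""
    ticks = 0
    while charge > 0:
        charge -= LEAK_PER_TICK
        ticks += 1
    return ticks

def transduce(intensity: float) -> int:
    """The whole transducer: light in, binary-encodable integer out."""
    return ticks_to_discharge(charge_from_light(intensity))

if __name__ == "__main__":
    # Brighter light -> more charge -> more ticks: the integer is causally
    # bound to the light, whatever encoding the rest of the computer uses.
    for light in (0.1, 0.5, 1.0):
        print(light, transduce(light))
```

The point of the sketch is only that each step in the chain (physics, machine construction, encoding) is fixed by how this particular machine is built, not by anyone's interpretation.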
So: even if we grant that Harnad is right that a full account of how internal symbols can be attached to the world needs something more like the TTT than the TT, there is no reason why this must force us to abandon the insight that mentality might consist largely in the computational manipulation of symbols, or why a touch of computation in the night should somehow divest these symbols of meaning. But I think there is a deeper question here, which is the extent to which internal symbols might get their attachment to the world through language. Perhaps Turing's insight was deeper than we now give him credit for, and he saw that much of our conceptual framework is connected to the world more through language than through the senses. In Harnad's terminology, why should there not be transducers to language as well as to the physical sensory inputs? To pass what might be called the P(hysical)TT requires us to build a robot monkey, but the difference between that and the TT might still be a more significant step towards the TTT than all this sensorimotor embedding.
By the way, I would argue that Searle's trick doesn't work, even if it mattered that it did, for all it claims to show is that (if Computationalism is correct) the hardware running the program has no mentality: IT doesn't understand Chinese. Searle argues essentially that the CPU chip in the computer running the Chinese-understanding program doesn't understand Chinese (not that "... we could ourselves become implementations of the very same symbol system that had passed the Chinese TT"). That hardly seems surprising. There is an implicit claim that only the hardware is really there (one which is sometimes conveyed by emphasising that one rejects Dualism, or by using such phrases as 'ghostly computational executives'). But this begs exactly the question we are wrestling with. Searle's argument can't persuade me that software isn't real, since it assumes this. In more recent work Searle has become quite explicit on this; he thinks in fact that to talk of software is incoherent.
However, in spite of Searle's authority, it seems to be simply a fact that software does exist. But it is, indeed, very peculiar stuff. For example, is software to be thought of as machinery to be patented, or as text to be copyrighted? Both seem appropriate in some ways but dramatically not in others, and the legal system is confused on the matter. Creating software feels like engineering, but no other engines can be sent along telephone wires. At some level, all software consists of symbols which are being 'interpreted' by a physical, often electrical, machine. This is 'machine code'. But one needs to emphasise just what a very low level this often is, sometimes within the operation of a silicon chip itself; and that the relationship between these symbols and this machine is not in the least like that between some instructions and a human interpreter of them, but is more like that between patterns of holes in a card and the shifting levers of a mechanical loom. (This contrast is why I believe that a human implementation does not count as a real implementation, or, better, that John Searle simulating a computer is not actually a computer.) Notice also that these 'symbols' are not formal in the sense used in these arguments, but have quite determinate, fixed meanings as specifications of state changes of the hardware.
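The contrast can be put in miniature with a toy sketch (entirely my own illustration, not anything from the exchange): a 'machine' whose instruction symbols each name a fixed, determinate state change, so that executing a program is mechanical, pattern-driven dispatch rather than anything like a person reading and construing instructions:

```python
# Toy illustration: machine-code "symbols" as specifications of state change.
# Each opcode is bound to one determinate alteration of the machine's state;
# "interpretation" is just lookup and execution, loom-style.

state = {"A": 0, "B": 0}  # the machine's entire state: two registers

def INC_A(s): s["A"] += 1          # opcode 0x01: increment register A
def ADD_A_B(s): s["B"] += s["A"]   # opcode 0x02: add A into B
def CLR_A(s): s["A"] = 0           # opcode 0x03: clear A

OPCODES = {0x01: INC_A, 0x02: ADD_A_B, 0x03: CLR_A}

def run(program, s):
    """Look up each symbol and make the state change it specifies; nothing is 'understood'."""
    for opcode in program:
        OPCODES[opcode](s)

run([0x01, 0x01, 0x02, 0x03], state)
print(state)  # {'A': 0, 'B': 2}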
Someone who doubts the reality of software will no doubt be ironically amused here, since I may seem to be arguing that computers are even less plausible candidates for cognitive talents. My occupant of the electronic Chinese room can't understand anything, never mind Chinese: all it is is a bundle of circuits which twitch rapidly in response to a few hundred voltages. But of course that's the central processor, not the entire computer, software and all, and still less the program itself. Searle has taken the entire complexity of a computational system and divided it into the CPU - casting himself in that role - and everything else - which has become a few rules. This may have been a debating trick or ignorance, but in any case Harnad should know better than to follow him. As he says, "if such mere hand-waving were all that the original Chinese Room Argument had been based on, then that argument would have been wrong too, and the "System Reply" -- to the effect that Searle is just part of a system, and that it is the system as a whole, not Searle, that would understand Chinese -- the reply favored by most of Searle's critics, would have been correct." It was, three times.
Hayes raises a number of interesting points. He continues to argue (see Hayes et al. 1992) that Searle is not an implementation of the TT-passing program, even though (1) he uses and steps through exactly the same code (and, remember, it matters not a bit whether Searle does this at a higher software level or at the rock-bottom machine code level -- see the "acrostic" reply to MacLennan) and (2) his performance is TT-indistinguishable from the computer's (till doomsday, in principle). Maybe there is something about the magic of real implementation (through mindless, mechanical-loom-style pattern matching) such that it is capable of generating a ghost in the machine only when there is not already a ghost in residence, performing the pattern matching! To me, this sounds like bad news for implementation-independence -- and also like a lot of mentalistic special pleading about what counts as an implementation when that should all have been settled in advance, mind-independently, before computation (which, on the face of it, has nothing to do with mentation) ever became a challenger in the mental arena.
I agree that "interpretation," in the sense of rule-governed physical pattern matching (as in a mechanical loom or digital computer) is not the same as the conscious interpretation of syntactic symbol manipulation rules by a person. But it's the execution of the manipulations that we are equating here, not the "interpretation" in either of these senses. Indeed, the sense of "interpretation" that we are actually aiming for is yet a third one: the sense in which thoughts are meaningful (and ungrounded symbols, undergoing manipulation, no matter by whom or what, are not).
Never mind. Let us concede that if Hayes can ever give a nonarbitrary criterion for what does and does not count as an implementation of the same software among otherwise Turing-indistinguishable, Turing-equivalent and even strongly equivalent "implementations" ("virtual" implementations, shall we call these illicit ones?) of the same symbol system, then the Chinese Room Argument will have to be reconsidered (but probably so will a lot of the computationalism and functionalism that currently depends on the older, looser criterion).
I do have to point out, though, that there is a difference between a computer being connected to peripheral transducers (cameras, say), and the computer's being those transducers (which it is not: a computer certainly consists of transducers too, but not the transducers that would be a robot's sensorimotor surfaces; those are the kinds of transducers I am talking about). This is not just a terminological point. My own grounding hypothesis is that, to a great extent, we are (sensorimotor) transducers (and their analog extensions); our mental states are the activity of sensorimotor transducers (which are part of an overall TTT-capable system). Their activity is an essential component of thinking states. No transducer activity: no thinking state. There is no way to "reconfigure" an all-purpose computer, one that can implement just about any program you like, into a sensorimotor transducer -- except by adding a sensorimotor transducer to it. That, I take it, is bad news for the hypothesis that thinking is just computation (if my transduction hypothesis is right).
Because I'm interested in mind-modelling and not just in machine virtuosity, I have singled out TTT-scale grounding as the empirical goal. One can speak of a digital camera as "grounded" in a trivial sense: the internal computational states in such a "dedicated" computer are indeed "bound" to certain external energy configurations falling on its transducer surface, and not just as a matter of our interpretations. But such trivial grounding does not justify talking about the camera's having "beliefs"! Only the TTT has the power to match the complexity and narrow the degrees of freedom for the interpretation of its internal states to something that is commensurate with our own (and I agree with Hayes that the expressive power of natural language, a subset of the TTT, may well loom large in such a system). Otherwise we are indeed talking metaphor (or hermeneutics) rather than reality.
-- S.H.