Harnad, S. (1993) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism", D.M.W. Powers & P.A. Flach, eds.). Pp. 53-56.
Drew McDermott
Computer Science Department
Yale University
New Haven, CT 06520 USA
mcdermott@cs.yale.edu
Stevan Harnad correctly perceives a deep problem in computationalism, the hypothesis that cognition is computation, namely, that the symbols manipulated by a computational entity do not automatically mean anything. Perhaps, he proposes, transducers and neural nets will not have this problem. His analysis goes wrong from the start, because computationalism is not as rigid a set of theories as he thinks. Transducers and neural nets are just two kinds of computational system, among many, and any solution to the semantic problem that works for them will work for most other computational systems.
Harnad defines a computational system as one that manipulates symbols that are "systematically interpretable" (Sect. 1.4). It is easy to assume that this is a straightforward, easily satisfied requirement. In fact, a little contemplation will show that very few computational systems meet it. We tend to think otherwise because we think of digital computers as the paradigmatic computational systems, and they are (by definition) formal systems. A physical digital computer is homomorphic to a term-rewriting system whose symbols can be taken to refer to equivalence classes of physical states of the computer, defined at quantized times. The state at time n+1 is determined (possibly stochastically) from the state and input at time n. Such a formal system is embodied in a physical system by ignoring the times in between time n and time n+1, and, of course, by having the clock run fast enough that no one cares about the state in between.
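To make the picture concrete, here is a minimal sketch of such a state-transition system (the two-bit counter and its transition rule are invented purely for illustration): the symbols of the formal system refer only to equivalence classes of the machine's own states, sampled at quantized times, and whatever happens between ticks is simply ignored.

```python
# Minimal illustrative sketch: a digital computer viewed as a discrete
# state-transition system.  Its symbols denote equivalence classes of the
# machine's own physical states, sampled at times n = 0, 1, 2, ...

def step(state, inp):
    """Transition rule: the state and input at time n fix the state at n+1.
    Here 'state' is a two-bit counter and 'inp' an increment signal."""
    return (state + inp) % 4

def run(initial_state, inputs):
    """Iterate the transition, ignoring whatever the hardware does between ticks."""
    states = [initial_state]
    for inp in inputs:
        states.append(step(states[-1], inp))
    return states

print(run(0, [1, 1, 0, 1]))  # [0, 1, 2, 2, 3]
```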
I could be more precise about digital computation, but there is no point in it, for two reasons. First, the fact that the computer embodies a formal system whose symbols are systematically interpretable as denoting the states of the computer itself is completely independent of the question whether the data structures of the machine denote entities outside the computer. The original interpretation has no leverage on this second sort of interpretation, which need not be "systematic" (whatever that means at this point), and presumably depends on causal links between the entities referred to and the data structures that are doing the referring.
Second, it's completely clear that brains are not digital computers. They can be viewed from angles at which they sort of look kind of digital, but you have to squint. The original notion of systematic interpretability fuzzes away at this level of resolution. Nonetheless, computationalism is the most powerful tool for understanding brains. How can that be?
Let's start by rejecting the simplistic partition of computation into digital and analog. When Harnad says (sect. 1.2), "What I will say about symbolic computation does not apply to analog "computation" or to analog systems in general, whose activity is best described as obeying differential equations rather than implementing symbol manipulations," he implicitly appeals to this partition. But the brain could use lots of interesting techniques for computing that are not easily characterized as symbol manipulation or as differential-equation solving. For example, suppose the brain measures differences between arrival times of signals in the left ear and right ear by allowing potentials to start growing when a signal is received from one ear, then measuring the potential when the signal is received from the other. (I have no idea if this is silly or not.) The measurement has the effect of cutting off a smooth change, causing the current value to be transformed into some other medium, such as a rate of neuron firing. Is this digital? Nope. Is it analog? Only in the trivial sense that the whole system is describable by some differential equation. Of course, a flip-flop is characterized by a differential equation, too.
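A rough sketch of the timing mechanism just described may make the point clearer (the growth rate and gain are invented for illustration): the quantity being measured grows smoothly, but the read-out event cuts that growth off and re-expresses it in another medium, a firing rate.

```python
# Hedged sketch of the hypothetical interaural-timing mechanism described
# above.  Parameter values are invented; nothing here is claimed about how
# real auditory neurons work.

GROWTH_RATE = 1.0   # potential units per millisecond (assumed)
RATE_GAIN = 50.0    # spikes per second per potential unit (assumed)

def interaural_delay_to_rate(t_left_ms, t_right_ms):
    """Return a firing rate encoding the left/right arrival-time difference."""
    onset, readout = sorted((t_left_ms, t_right_ms))
    potential = GROWTH_RATE * (readout - onset)  # smooth growth, cut off at readout
    return RATE_GAIN * potential                 # transduced into another medium

print(interaural_delay_to_rate(10.0, 10.4))  # 0.4 ms difference -> 20.0 spikes/s
```

Neither "digital" nor "analog" is an illuminating label for this: the growth is continuous, but the read-out is a discrete event that transforms the value wholesale.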
Another example: Consider a set of asynchronous digital computers, which pass messages among each other. Is this ensemble digital or analog? Clearly, it is neither, and in particular it is not digital because it doesn't have well-defined state transitions. (Cf. Sloman's review of Penrose in Artificial Intelligence, August 1992.)
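A small sketch of such an ensemble (the two-node structure and timing are assumed purely for illustration): each machine steps at its own pace, and because there is no global clock, the ensemble as a whole has no single well-defined sequence of state transitions.

```python
# Illustrative sketch only: two asynchronous "computers" exchanging messages
# via queues.  Each node runs on its own schedule; the interleaving of events
# differs from run to run, so there is no global state-transition sequence.

import queue
import random
import threading
import time

def node(name, inbox, outbox, rounds=3):
    for i in range(rounds):
        time.sleep(random.uniform(0.0, 0.01))  # each node has its own tempo
        outbox.put(f"{name}:{i}")
        try:
            print(name, "received", inbox.get(timeout=0.1))
        except queue.Empty:
            pass

a_to_b, b_to_a = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=node, args=("A", b_to_a, a_to_b)),
           threading.Thread(target=node, args=("B", a_to_b, b_to_a))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```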
The reason to make these observations is to point out that the computationalist counts a large number of types of device as computers. With current technology, digital computers are the most useful type, because they can simulate all the others, and not because digital computers are formal symbol-manipulating systems. Using a digital computer does not commit you to any position on symbols and their meanings; you can even simulate systems that have no symbols at all. In fact, figuring out exactly what the symbols of a system are, or whether it has any, is just as mysterious as, and more basic than, the question of how symbols get their meanings. Implementing a system using a digital computer does not give an automatic answer to these questions; a computer is a chameleon, which takes on the philosophical problems raised by the system it simulates. It is a sterile activity to try to draw precise boundaries between digital, symbolic computation and other phenomena. Even spelling out what does not count as a computer is fairly pointless (although an interesting challenge).
So what does computationalism come down to? I think that it amounts to the claim that nothing important is left out when a mental system is modeled computationally. Suppose we wanted to test hypotheses about the operation of some neuronal system. Some hypotheses would require us to simulate the innards of the neurons' membranes. Others would require only that we mimic the system at some higher level, where, for instance, we could let a digital clock and counter take the place of a smoothly growing membrane potential. In either case, we could use a digital computer to do the modeling. The question is, what would be missing? Is it the case, as Searle has said, that all this simulation exhibits no more mentation than a simulation of a rainstorm exhibits moisture? The computationalist thesis is that nothing essential would be missing; the simulating system exhibits mentation to the same degree as the simulated one.
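As an illustration of the two modeling levels just mentioned (all constants are invented for the example), the same "potential reaches threshold after a delay" behavior can be captured either by integrating a smoothly growing membrane potential or by letting a clock and counter stand in for it:

```python
# Illustrative sketch: one behaviour, two levels of modelling.  Thresholds,
# time steps and input strengths are assumed values, not empirical ones.

THRESHOLD = 1.0      # potential units (assumed)
DT = 0.001           # seconds per simulation step (assumed)
INPUT_CURRENT = 0.5  # potential units per second (assumed)

def time_to_threshold_membrane():
    """Lower level: Euler-integrate the smoothly growing membrane potential."""
    v, t = 0.0, 0.0
    while v < THRESHOLD:
        v += INPUT_CURRENT * DT
        t += DT
    return t

def time_to_threshold_counter():
    """Higher level: a digital clock and counter stand in for the potential."""
    ticks_needed = round(THRESHOLD / (INPUT_CURRENT * DT))
    return ticks_needed * DT

print(time_to_threshold_membrane(), time_to_threshold_counter())  # both ~2.0 s
```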
The computationalist thesis is, I assume, fairly uncontroversial if we restrict ourselves to talking about timing auditory signals. It becomes much more problematic when we bring up consciousness, subjectivity, and qualia. One computationalist approach to these problems is to propose that consciousness is the functioning of a certain sort of self-modeling capacity. It doesn't matter what medium is used to store the system's self-model. The structure of consciousness will be mostly unaffected, and there's no substance to consciousness separate from its structure.
I have spent most of my space arguing that Harnad is wrong about what computationalism is. I think he is also wrong about whether Searle has refuted computationalism, but too much ink has already been wasted arguing about Searle, and anyhow I agree that symbol grounding is important. What I disagree with is Harnad's notion that computationalism makes it obvious where the symbols are, but leaves them dangling high above the ground, and that some new kind of neural net is needed to tie them down. In practice, grounding the symbols is not hard; any kind of transducer will do, including neural nets, but also including teletypes connected to someone who can see (Patrick Hayes, personal communication). What's hard is to define "symbol" and "meaning" in a way that is not relative to an observer. For most day-to-day purposes, to say that a given configuration of a system is a "symbol" that "means" or "refers to" something outside the system is to say that someone finds it convenient to draw the correspondence. A weather-prediction system refers to a storm because the meteorologists using it interpret its inputs and outputs to refer to a storm. Searle and other philosophers are tempted to escalate this to a distinction between "original" and "derived" intentionality, so that meteorologists can outperform computers by referring purely by virtue of being in a certain intentional state. I would rather explain reference to storms, in the case of both computers and humans, by finding the right kind of causal link between storms and internal symbols. It's crucial that we eventually spell out some kind of observer-independent link, or there can never be a coherent account of how the universe came to include observers in the first place.
Harnad is engaged in a similar enterprise, but is led by some tortuous routes to avoid Searle's "Chinese Room" pseudoproblem. At some point, I lose the train of his argument. Suppose we agree that sensory transduction is important. How does that help us to ground symbols? Harnad says (Sect. 7.1): "A grounded system is one that has the robotic and the symbolic capacity to pass the TTT [Total Turing Test] in such a way that its symbols and symbolic activity cohere systematically with its robotic transactions with the objects, events and states of affairs that its symbols are interpretable as being about." Okay, but is that all there is to it? If symbols "cohere systematically" with the things they are "interpretable as being about," does that mean that they are about those things? I am tempted to say Yes; Searle, and many other philosophers, would say No, and I don't know how to close the gap. In any case, I don't see what neural nets contribute, except that they are respectable computational devices.
McDermott says transducers and neural nets are just two kinds of computational system. I agree about neural nets (room two, SIM), but I would be interested to know how McDermott would reconfigure his Sun to make it implement an optical transducer (as opposed to a virtual optical transducer). Connecting it to an optical transducer begs the question, of course, because that way I could "reconfigure" it into a furnace or an airplane too, just by connecting them. The reason you can't do it otherwise is that optical transduction, heating and flight are not implementation-independent formal properties. There's more than one way to "implement" them, to be sure, but none of the ways is computational (for they involve "reconfiguring" matter in general, not just a digital computer's states).
A flip-flop in a digital computer is indeed describable by a differential equation, as surely as any other analog system is (all implementational hardware is of course analog), but the computation it is performing is not. To know what that is, you need to look at the level of what the flip-flop patterns are encoding. That's implementation independence.
McDermott suggests that I am holding "computers" and "computation" to distinctions that are either irrelevant or untenable. If this is meant to endorse ecumenism about computation, I would refer him to my response to Dietrich: If computation is allowed to become sufficiently broad, "X is/is-not computation" becomes vacuous (including "cognition is computation"). McDermott doesn't like my own candidate (interpretable symbols/manipulations) because sometimes you can't specify the symbols. Fine, let it be interpretable code then (is anyone interested in uninterpretable code?). Code that "refers" only to its own physical implementation seems circular. Causal connections between the code and computer-external things that it is interpretable as referring to, on the other hand, are unexceptionable (that's what my own TTT calls for), but surely that's too strong for all the virtual things a computer can do and be. (When you reconfigure a digital computer to simulate all others -- say, when you go from a virtual robot to a virtual planetary system -- are you reconfiguring the (relevant) "causal connections" too? But surely those are wider than just the computer itself; virtual causal connections to a virtual world are not causal connections at all -- see the Cheshire cat response to Dyer.)
One can agree (as I do) that nothing essential is missing in a simulated rainstorm, but the question is: Nothing essential to what? I would say: to predicting and explaining a rainstorm, but certainly not to watering a parched field. So let's get to the point. We're not interested in rainstorms but in brainstorms: Is anything essential missing in a simulated mind? Perhaps nothing essential to predicting and explaining a mind, but certainly something, in fact everything, essential to actually being or having a mind. Let's not just shrug this off as (something interpretable as) "self-modeling capacity." Perhaps the meanings of McDermott's thoughts are just something relative to an external observer, but I can assure you that mine aren't!
-- S.H.