Harnad, S. (1993) Grounding Symbols in the Analog World with Neural Nets. Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism", D.M.W. Powers & P.A. Flach, eds.). Pp. 41-43.


A Note on the Symbol Grounding Problem and its Solution

Vasant Honavar
Department of Computer Science
Iowa State University
Ames, Iowa 50011. U.S.A.
honavar@iastate.edu

Hilbert's quest for a purely syntactic framework for all of mathematics is what led to what we now call formal systems or symbol systems. The meaningless statements of a formal system are finite sequences of abstract symbols. A finite number of such statements are taken as axioms of the system, and a finite number of transformation rules specify how one string of symbols can be converted into another. What do these symbols have to do with anything? The answer is: they don't --- unless there is some way to interpret such symbols. But how? No such interpretation can be complete if it is performed by another formal (and hence equally meaningless) system that translates the symbols into other symbols using a codebook or a dictionary, because even the simplest (atomic) symbols are necessarily meaningless (unless an external observer reads meaning into such symbols for his/her own purposes). As Harnad points out, it is this essential meaninglessness of symbols in a symbol system that Searle exploited in his critique of strong AI in the form of the Chinese room argument.
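To make the purely syntactic character of such a system concrete, here is a minimal sketch of a toy string-rewriting system in Python. The alphabet, axiom, and rules are invented solely for illustration; nothing inside the system assigns any meaning to the symbols it manipulates.

```python
# A minimal sketch of a formal symbol system: strings over an alphabet,
# one axiom, and purely syntactic rewrite rules. The symbols 'M', 'I', 'U'
# are arbitrary shapes; nothing in the system says what they mean.
# (The particular rules below are illustrative only.)

AXIOM = "MI"

def rewrite_steps(s):
    """Yield every string reachable from s by one rule application."""
    if s.endswith("I"):          # Rule 1: xI -> xIU
        yield s + "U"
    if s.startswith("M"):        # Rule 2: Mx -> Mxx
        yield "M" + s[1:] * 2
    for i in range(len(s) - 2):  # Rule 3: III -> U
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]

# Derive "theorems" purely by shape manipulation; no interpretation is involved.
theorems, frontier = {AXIOM}, [AXIOM]
for _ in range(3):
    frontier = [t for s in frontier for t in rewrite_steps(s) if t not in theorems]
    theorems.update(frontier)
print(sorted(theorems, key=len)[:10])
```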

Harnad's proposal is to imbue the symbols in a symbol system with meaning by ensuring that they are physically grounded in the environment. I will call this proposal Harnad's symbol-grounding thesis (HSG thesis). In particular, Harnad proposes a hybrid model in which a certain class of neural networks that learn to categorize analog sensory projections into symbols (category names) serves to establish bottom-up grounding of the symbols in a symbol system. Presumably similar processes would be at work at the motor interface.
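As a schematic illustration of this bottom-up pathway (and only that; it is not Harnad's actual network), the sketch below maps continuous "sensory projection" vectors onto discrete category names by learning simple prototypes. All names and data are hypothetical.

```python
import numpy as np

# Minimal sketch of a categorizer that grounds elementary symbols:
# continuous "sensory projections" (vectors) are assigned to discrete
# category names via learned prototypes. This is a schematic stand-in
# for the kind of network Harnad has in mind, not his actual model.

class Categorizer:
    def __init__(self):
        self.prototypes = {}   # symbol -> running-mean prototype vector
        self.counts = {}

    def learn(self, symbol, projection):
        """Update the prototype for `symbol` with one analog projection."""
        x = np.asarray(projection, dtype=float)
        n = self.counts.get(symbol, 0)
        p = self.prototypes.get(symbol, np.zeros_like(x))
        self.prototypes[symbol] = (p * n + x) / (n + 1)
        self.counts[symbol] = n + 1

    def name(self, projection):
        """Map an analog projection to the nearest category name (symbol)."""
        x = np.asarray(projection, dtype=float)
        return min(self.prototypes,
                   key=lambda s: np.linalg.norm(self.prototypes[s] - x))

# The symbol "horse" now has a non-arbitrary link to a region of sensory space.
c = Categorizer()
c.learn("horse", [1.0, 0.2]); c.learn("horse", [0.9, 0.3])
c.learn("zebra", [1.0, 0.9]); c.learn("zebra", [0.8, 1.0])
print(c.name([0.95, 0.25]))   # -> "horse"
```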

Let us examine the HSG thesis a little more closely. The HSG thesis is relevant if we concede that a symbol system is a necessary component of a cognitive architecture. In that case, the critical task is to make the otherwise meaningless symbols in a symbol system meaningful (to the system, not just to some external observer doing the interpretation) via physical grounding of symbols, that is, by employing processes that (causally) associate energy states in the physical world with symbols of the symbol system (through transducers) and, conversely, symbols with energy states (through effectors).

Analog sensory projections seem to be a critical component of Harnad's solution to the symbol grounding problem. But the trouble is that "analog process", like "computation", is a vague term. If analog means continuous (as opposed to discrete), the centrality of analog sensory projections appears questionable. Physicists have not resolved the controversy over whether the physical world is truly analog or discrete. Insisting on analog sensory projections is tantamount to suggesting that the physical world is fundamentally analog --- a proposition of questionable validity if we accept wave-particle duality. For symbol grounding, the critical distinction is the one between a formal symbol system (which in its ungrounded form has no causal powers) and a physical system (with causal powers) --- not the distinction between continuous and discrete processes. (Of course, this does not mean that analog processes embodied in physical systems do not play an important, perhaps even necessary, role in intelligent behaviour.)

What appears to be essential for symbol grounding is energy transfer across the interface between the system and its environment: it is such energy transfer that activates symbol structures in response to transduced environmental states and, through effectors driven by the states of the symbol system, produces changes in environmental states. (This discussion applies equally to the internal physical environment of the system, i.e., the physico-chemical basis of pain, pleasure, etc., which can provide grounding for symbols just as the external environment does.) The meaning of symbol structures is a consequence of their role in the causal loop that connects the system with its environment.
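A deliberately simple sketch of such a causal loop follows; the transducer, decision rule, and effector are hypothetical placeholders meant only to show how energy states, symbol activations, and environmental changes can be chained.

```python
# Schematic of the causal loop described above: a transducer maps environmental
# states (energy) into symbol activations, and an effector maps symbol states
# back into changes in the environment. All names and values are illustrative.

def transduce(energy_reading):
    """Transducer: physical magnitude -> activated symbol (category name)."""
    return "hot" if energy_reading > 40.0 else "cold"

def decide(symbol):
    """Symbol system: purely formal mapping from symbols to action symbols."""
    return {"hot": "retract", "cold": "approach"}[symbol]

def effect(action_symbol, environment):
    """Effector: action symbol -> change in the physical environment."""
    delta = -5.0 if action_symbol == "retract" else +5.0
    environment["temperature_at_sensor"] += delta
    return environment

# One pass around the loop: environment -> symbols -> environment.
env = {"temperature_at_sensor": 55.0}
percept = transduce(env["temperature_at_sensor"])
action = decide(percept)
env = effect(action, env)
print(percept, action, env)
```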

Harnad's proposal also implies that the shapes of symbols in a grounded symbol system are not arbitrary --- i.e., because of the intrinsic meaning that a grounded symbol embodies (by virtue of its grounding), the system cannot interpret the symbols in an arbitrary fashion. In other words, the shape of a symbol might be a consequence of the category of sensory inputs that it has come to represent (via grounding). However, it is far from clear that learning is essential for symbol grounding (this does not mean that learning is not important for a variety of other reasons). It is easy to imagine (as Harnad concedes) systems in which the necessary grounding of symbols is hard-wired at birth. In this case, one might argue that the grounding of symbols was discovered by evolutionary processes. But then it is not clear whether such symbols can be attributed the same sort of meaning as symbols that are grounded in learned categories. Extrapolating this line of thought leads to some intriguing questions concerning the locus of semantics --- is it the organism (system)? the species? the gene? the environment? the cosmos?

Harnad's arguments for the need for grounding of symbols also raise additional questions about the working hypothesis of strong AI --- that cognition is computation. If by computation we mean the formal notion of computation put forth by Turing, and we take cognition to mean something beyond merely formal symbol manipulation, then the adequacy of our current notions of computation for realizing intelligent systems is called into question.

HARNAD'S RESPONSE TO HONAVAR:

Honavar says little that I can disagree with. For me, analog structures and processes are those that are best described as obeying differential equations rather than as implementations of implementation-independent symbol manipulations (or, as Maclennan puts it, symbolic difference equations). The difference between a real planetary system and a computer-simulated planetary system captures the distinction quite nicely. It seems to me that the final chapter of quantum mechanics (concerning the ultimate continuity or discreteness of the physical world) has nothing to do with this dichotomy, no matter what it turns out to be.
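To make the contrast concrete, consider a toy sketch: the program below is a discrete, symbolic approximation to the orbital differential equations, and whatever numbers it produces, it exerts no gravitational force; a real planetary system obeys the same equations as physics, not as syntax. The constants and step size are chosen purely for illustration.

```python
import math

# A computer-simulated "planetary system": a discrete, symbolic approximation
# (semi-implicit Euler steps) to the differential equations a real planet
# obeys physically. The simulation manipulates number-symbols; it exerts no
# gravitational force. Units are scaled for illustration only (GM = 1).

GM, dt = 1.0, 0.001
x, y = 1.0, 0.0            # position
vx, vy = 0.0, 1.0          # velocity for a roughly circular orbit

for _ in range(10_000):
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3   # d^2r/dt^2 = -GM r / |r|^3
    vx, vy = vx + ax * dt, vy + ay * dt   # update velocity first,
    x, y = x + vx * dt, y + vy * dt       # then position (symplectic Euler)

print(round(math.hypot(x, y), 3))   # orbital radius stays near 1.0
```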

Whether symbols are grounded by learning or evolution does not much matter to my theory; I happen to focus on learned categories, but the raw input we begin with is clearly already filtered and channeled considerably by evolution. It would be incorrect (and homuncular), however, to speak of a grounded system's "interpreting" the shapes of its symbols. If the symbols are grounded, then they are connected to and about what they are about independently of any interpretations we (outsiders) project on them, in virtue of the system's TTT interactions and capacity. But (as Searle points out in his commentary, and I of course agree), there may still be nobody home in the system, no mind, hence no meaning, in which case they would still not really be "about" anything at all, just, at best, TTT-connected to certain objects, events and states of affairs in the world. Grounding does not equal meaning, any more than TTT-capacity guarantees mind. And there is always the further possibility that symbol grounding is a red herring, because symbol systems are a red herring, and not much of whatever really underlies mentation is computational at all. The TTT would still survive if this were the case, but "grounding" would just reduce to robotic "embeddedness" and "situatedness."
-- S.H.