Peirce and Formalization of Thought: the Chinese Room Argument
Abstract
Whether human thinking can be formalized and whether machines can think in a human sense are questions that have been addressed by both Peirce and Searle. Peirce came to roughly the same conclusion as Searle: that the digital computer would not be able to perform human thinking or possess human understanding. However, their rationales differ on several important points. Searle approaches the problem from the standpoint of traditional analytic philosophy, where the strict separation of syntax and semantics renders understanding impossible for a purely syntactical device. Peirce's framework rejects any such separation; he argued instead that the computer could achieve only algorithmic thinking, which he considered the simplest type. Although their approaches were radically dissimilar, their conclusions were not. I will compare and analyze the arguments of both Peirce and Searle on this issue, and outline some implications of their conclusions for the field of Artificial Intelligence.
Background
For Peirce, the elements of thought he termed "cognitions" were, roughly speaking, the operational building blocks of mind. Thus, in the essay Questions Concerning Certain Faculties Claimed for Man, Peirce (1992c) supports the thesis that cognitions, which he uses to refer to the contents (and, eventually, the objects) of thinking, are allied to sensation, and that there is no independent faculty of intuition. This latter faculty would involve the "determination" (p. 12) of a cognition "directly by the transcendental object" (p. 12). That is, Peirce would like to show that the faculty of intuition, whereby we purportedly have direct and accurate knowledge of the world, is both unlikely and unnecessary for an explanation of thought or for our knowledge of the world. True and accurate knowledge is not possible, if his arguments are correct, either in the sense that we know, without inference, that there is an objective reality or that we know, without inference, anything definite about this reality; and cognitions, as both the objects and contents of thought, are solely determined, through various processes, by other cognitions (which include sensations).
Over the next several pages of this essay, Peirce (pp. 12-15) advances arguments designed to show that our possession of the faculty of intuition is not decisively supported by evidence, and further, that even if there were such a faculty, cognition, the "work of the intellect" (p. 15), could perform its functions equally effectively without it. If that were true, then Occam's Razor would entail the elimination of the hypothesis of intuition, since we already know that much of thought involves cognition without intuition. Thus, commenting on Peirce, Habermas (1995) states, "the epistemological critique is directed against an intuitionism that claims that our judgments are constructed from immediately given and absolutely certain ideas or sense-data" (p. 251). Later (largely in the essay Some Consequences of Four Incapacities [1992d, pp. 28-55]), Peirce will argue that the interactions of these cognitions (our mental operations, constituting the basis for our inferences about the world and about our thinking) may be described in terms of one of several types of logical processes. It is this variety of logical processes, i.e., Peirce's explication of types of thinking, that will play a large part in this essay.
Peirce (1992c, 1992d; 1991) offers detailed speculations and arguments (pp. 15-16) concerning the biology and physiology of thinking; these are based on some knowledge of neurophysiology and on speculation about the processes of the central nervous system (CNS). He argues that, starting with extremely complex combinations of very simple sensations, no more than the "excitation of a nerve" (p. 16), and continuing through various processes of inference, the mind creates conceptions of "mediate simplicity" (p. 16) which reduce the "inconceivably complicated" sets of sensations to simpler, abstract concepts. These concepts are the bases and the components of our understanding of the world. To summarize one of his examples (p. 16), the motion of an image over the retina produces, through such inferential processes, the conception of space.
At this point, Peirce seems to have concluded that mind can result from formalizable operations on arbitrary symbols. That is, if a description of thinking may be couched in terms of logical processes (for further explication of these terms, see the next section), might these same processes be the equivalents of formal logical or algebraic procedures? If Peirce can relate thinking, in general, to cognitions whose interactions are describable in terms of formally manipulable inferential processes, the conclusion seems to be that this same cognition is based on brain physiology describable by, perhaps even employing, formalizable operations. At first glance, his argument seems to start very similarly to Searle's Chinese Room argument (e.g., 1990). Searle, however, concludes, roughly, that mind cannot result solely from formal processes. Peirce seems to arrive, instead, at the opposite conclusion, that mind can in fact be realized through the operation of formalizable processes.
However, as will become clear, the above analysis is too superficial. Although Peirce's position on the logic of thought and language differs radically from Searle's, he arrives at a very similar conclusion. Searle's argument, on the one hand, is based on an analysis of language and formal systems that assumes a clear distinction between syntax, i.e., rule-based operations on symbols which proceed independently of the symbols' meanings, and semantics, i.e., operations based solely on the meanings of symbols. He maintains that computers are devices that exclusively employ syntactical operations. Given this point of view, Searle concludes that computers, since they can only employ syntactical rules, cannot think in the sense that a human being can, i.e., with semantics; since computers cannot assign meanings, they cannot possess understanding (the term "computer" in this essay will refer to Turing-equivalent digital computers unless otherwise stated). Peirce, on the other hand, as we shall see below, explicitly repudiates such a distinction. He was fascinated by the possibility of machine intelligence. Peirce's analysis of thought is radically different from Searle's, in that he differentiates between three types of thought, all describable as variants of a basic syllogism, but only one of which is simple enough to be realized by computers. He thus comes to different conclusions about the nature of machines and thought: some machines, for Peirce, are extensions of people's thought processes, and some are literally thinking machines. Yet both philosophers, ultimately, conclude that the computer will never be able to think in a fully human manner, i.e., employing the full range of human capacities. A comparison and contrast of these points of view will offer valuable insight, I believe, into the debate on computers and mind.
The Chinese Room argument
Searle, in his Chinese Room argument (1990; 1994, pp. 200-220), argues that computers cannot have mind, at least in the sense of understanding. Some background is necessary to clarify his argument. A digital computer is a device that carries out symbolic manipulation by changing the values and spatial distributions of some physical entity, such as the voltages in various elements of semiconductor circuitry, in accordance with the well-defined rules of some formal system. In the case of modern computers, that system is, ultimately, Boolean algebra, a logical calculus devised by the mathematician George Boole (1958). While these symbol manipulations, realized as operations on the values and distribution of voltages, must be driven and instantiated by physical substances and processes, those physical processes have no relationship, except the most abstract, to the symbolic processes that they realize, i.e., the processes of logic. That is, all computer programs, in whatever language, however "high-level" the programs are, will eventually be realized in the computer by Boolean algebra, a formal syntactical language, at the lowest level. Those high-level languages, then, are employed by researchers in the field of Artificial Intelligence (AI) to model the operations of thought through abstract representations which, because they must be run on a computer, will ultimately be translated into Boolean algebra. Searle's point is twofold: first, that in these systems (employing Boolean algebra, or any other formal logic), the meanings of the symbols are entirely arbitrary: syntax is independent of semantics, and the system's rules are exclusively syntactical. Further, in the instantiation of this logic through the above physical processes, the relationship between any given physical quantity and the symbolic element it represents is also entirely arbitrary. In other words, the physical entities (the voltages, in this example) which comprise the functional elements of the device are, qua physical entities, irrelevant to the computer's function as a symbolic manipulator. In fact, it is quite possible, as Searle points out, to construct computers out of "cats and mice and cheese or levers or water ..." (1994, p. 207). As long as the dynamics of their relationships are constrained to correspond, at some level, to the syntactic relationships of symbolic logic, the actual physical realizations are irrelevant.
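To make this layering concrete, the following sketch (my own illustration, not drawn from Searle or Peirce; the function names and the eight-bit width are assumptions made for the example) shows how a "high-level" operation such as integer addition can be expressed entirely in terms of syntactic Boolean operations on bits, the kind of reduction described above.

    # A minimal sketch: integer addition realized purely as Boolean (AND, OR,
    # XOR) manipulations of bits, with no appeal to the meaning "number."

    def full_adder(a, b, carry):
        """Combine three bits using only Boolean connectives; return (sum, carry)."""
        s = (a ^ b) ^ carry                 # XOR gives the sum bit
        c = (a & b) | (carry & (a ^ b))     # AND/OR give the carry bit
        return s, c

    def add(x, y, width=8):
        """Add two non-negative integers one bit at a time, as circuitry would."""
        result, carry = 0, 0
        for i in range(width):
            bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= bit << i
        return result

    print(add(19, 23))  # 42; the Boolean rules themselves "know" nothing of arithmetic

The point of the sketch is only that the manipulations at the bottom are exhaustively specified by form: nothing in the rules refers to what the bits are taken to mean.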
The Chinese Room argument, then, proceeds as follows: in a closed room sits a person, totally ignorant of Chinese, who receives, through a slot in the wall, cards with Chinese characters on them. The person goes to a rulebook and finds some rule relevant to the character (or the last few characters) just presented, and on the basis of that rule, picks some other Chinese character(s) from a pile and passes them out through the wall. According to what Searle calls the "strong AI" position, "thinking is merely the manipulation of formal symbols" (1990, p. 26). Thus, if the rules are complete, according to that position, a Chinese speaker will be able to hold an intelligent conversation with the "room": the totality of the operator, rules, and symbols. But Searle argues that there is no thing or person in the room that understands Chinese (nor does the room as a whole). Therefore, Searle concludes, even if that room could intelligently converse in Chinese (a debatable point in itself, given, for example, Turing's demonstration that there are problems which are unsolvable by a Turing machine [1988, pp. 56-57]), it does so mindlessly, with no possible basis for understanding the symbols. Since, Searle argues, computers operate in this same fashion, that is, solely on the basis of syntactic relations, they too are, and must always be, mindless. His point is that since syntactic rules are based on manipulations of symbols without reference to their meanings, and since symbols realized as arbitrary physical entities must be interpreted by a mind to render them meaningful, that is, "a physical state of a system is a computational state only relative to the assignment to that state of some computational interpretation" (p. 210), one cannot generate mind from these arbitrarily instantiated formal processes. In digital computers, all that is happening is the creation and alteration of strings of symbols that must subsequently be interpreted by a "minded" human being. Thus, Searle notes, an observer who did not recognize Chinese characters, looking through a one-way window into the room, might understand the symbolic manipulations as stock-market formulas, and apply them consistently according to that interpretation (1990, p. 31).
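The structure of the room itself can be sketched in a few lines; the rulebook entries below are invented placeholders (not an actual Chinese conversation), and the sketch is meant only to show that the procedure is shape-matching all the way down.

    # A minimal sketch of the Chinese Room: the "operator" matches incoming
    # shapes against a rulebook and emits whatever card the rule dictates.

    RULEBOOK = {
        "你好吗": "我很好",            # if this shape comes in, pass that shape out
        "你叫什么名字": "我叫房间",
    }

    def chinese_room(incoming):
        """Return an output card by shape-matching alone; no meanings are consulted."""
        return RULEBOOK.get(incoming, "请再说一遍")   # default card for unfamiliar shapes

    print(chinese_room("你好吗"))   # the room "answers" without understanding

Whether any such finite rulebook could sustain a genuinely intelligent conversation is, as noted above, a separate question; the sketch only exhibits the purely syntactic character of the procedure.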
Peirce, logic, and thought
Several important issues raised by the above argument concern the nature of formalizability, of manipulations of symbols, and of the various types of formal logic. I can only touch upon this subject as it relates to Peirce. Roughly speaking, according to Peirce (e.g., in Deduction, Induction, and Hypothesis), there are three basic types of logic, derived from the three-part syllogism. This syllogism consists of
R, a rule: (the beans in this bag are white),
C, a case of the rule: (these beans are from the bag),
E, a result: (these beans are white)
(1992a, p. 188). By altering the order of the elements in this expression, Peirce realized that one could symbolize entirely different types of thinking. Thus, deduction consists of statements in the above order: R, C, E; induction in the order C, E, R; and hypothesis construction (also termed "abduction"; e.g., Houser & Kloesel, 1991, p. xxxviii) in the order R, E, C. Now the important point, as this relates to the Chinese Room argument and to the relationship between various types of logic and computers, is that computers function, through syntactic Boolean manipulations, solely as deductive devices. While computers may be programmed, then, at a high level of abstraction, to simulate induction, this simulation is accomplished through the ingenious use of complex deductive procedures, resolving ultimately to Boolean truth-tables, and the same holds for abduction, i.e., hypothesis formation.
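The three orderings can be laid out schematically; the encoding below is my own illustration rather than Peirce's formalism, and it is meant only to show which statements serve as premises and which is inferred in each mode.

    # A minimal sketch of Peirce's bean syllogism in its three orderings.

    R = "all the beans from this bag are white"   # rule
    C = "these beans are from this bag"           # case
    E = "these beans are white"                   # result

    INFERENCE_PATTERNS = {
        "deduction": {"premises": (R, C), "conclusion": E},  # necessary inference
        "induction": {"premises": (C, E), "conclusion": R},  # generalizing guess
        "abduction": {"premises": (R, E), "conclusion": C},  # explanatory guess
    }

    for name, pattern in INFERENCE_PATTERNS.items():
        print(f"{name}: given {pattern['premises']}, infer '{pattern['conclusion']}'")

Only the first pattern is truth-preserving in the deductive sense, which is precisely why it is the one a truth-table machine can carry out directly.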
Peirce held that the most important and creative aspect of the mind, i.e., "synthetic consciousness, or the sense of the process of learning" (1992b, p. 264), which is related to the type of thinking he termed "thirdness" (p. 260 and below), operates through a combination of habit and chance. It "has its physiological basis in the power of taking habits" (p. 264). In addition, however, he maintains that "it is essential that there should be an element of chance and that this chance or uncertainty shall not be entirely obliterated by the principle of habit" (p. 265). Thus for Peirce there seems to be a kind of balance between habit (fixed thinking and behavior) and chance or uncertainty (fluctuations in thought and behavior, producing variants of fixed habitual patterns). Thus, hypothesis formation and testing are in fact those aspects of thought which are least tractable to simulation, since, because of the element of chance, they are not algorithmic (more on this below).
Before proceeding further, it is necessary to elaborate on Peirce's classification of types of thinking. "Firstness" has to do with the consciousness of "immediate feeling" (1992b, p. 260), a consciousness only of the "fleeting instant," which once past, is "totally and absolutely gone" (p. 259). These instants run in a "continuous stream through our lives" (1992d, p. 42). "Secondness" is the consciousness in which the will appears, a consciousness "of something more than can be contained in an instant" (1992b, p. 260); the continuous stream of instants of thought begins to be combined by an "effective force behind consciousness" (1992d, p. 42). "Thirdness" "is the consciousness of process, and this in the form of the sense of learning, of acquiring the consciousness of synthesis" (1992b, p. 260). Peirce then goes on to speak of three different senses of thirdness.
The first is "accidental," and corresponds to "association by contiguity." This is interesting in its relation to behaviorism, to deductive logic, and also to our perception of space, for Peirce states that "we cannot choose how we will arrange our ideas in reference to time and space, but are compelled [by an] exterior compulsion" (p. 261). The second type of thirdness is "where we think different things to be alike or different" (p. 261); a thinking in which "we are internally compelled to synthetise them or sunder them association by resemblance" (p. 261). One is reminded of associational psychology, some aspects of cognitive psychology, and of inductive logic. The third type of thirdness is the highest, which the mind makes "in the interest of intelligibility by introducing an idea not contained in the data" (p. 261). Here we have the kind of thinking involved with hypothesis construction and testing, with science in general, and with art. Peirce states, "The great difference between induction and hypothesis is, that the former infers the existence of phenomena such as we have observed in cases which are similar, while hypothesis supposes something of a different kind from what we have directly observed" (p. 197). Peirce further states, "the work of the poet or novelist is not so utterly different from that of the scientific man" (p. 261). Habermas, commenting on abduction, states that, "what Peirce called 'percepts' depend upon those limiting cases of abductive inference which strike us in the form of lightning insights" (1995, p. 254).
We thus can relate the three types of syllogism to the different kinds of thirdness. Deduction is related to the lowest kind, the association by contiguity, since it can proceed solely through habit, without choice, and can, in addition, be performed by machines. Induction, since it involves ideas of similarity and difference, and association by resemblance, would seem to employ the second type of thirdness; whereas abduction must employ the last type. This is a much richer classification of logic and thought than Searle's (at least insofar as the Chinese Room argument is concerned), and it opens several questions as to the relationship between the two positions.
Peirce's stance on formal deductive logic is clearly set forth. He states that "formal logic centers its whole attention on the least important part of reasoning, a part so mechanical that it may be performed by a machine" (Ketner, 1988, p. 44). And we can now appreciate that this comes from his analysis of the type of thought involved in this process, that is, thirdness of the first type, where nothing external to the given data is required or arrived at, and the sequence of association is "externally compelled." To put this explicitly in terms of the computer, Peirce's external compulsion is equivalent to the specification of the steps of a procedure, i.e., to an algorithmic procedure in which every step is determined in advance: a Turing machine. Even in high-level programs, programs which seem to enable the computer to make choices, the range of potential choices is predetermined; and the actual choices made are strictly determined by the values input to the computer, as they are constrained to relate to the predetermined choices by the particular computer program. Thus every step is quite literally externally compelled where computers are concerned (even when one employs a "learning algorithm").
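A small sketch may make the point about "learning algorithms" vivid; the perceptron-style update rule and the sample values below are illustrative assumptions, not anything taken from Peirce or Searle.

    # A minimal sketch: even a "learning" program is externally compelled.
    # Every change it makes to itself is fixed in advance by the update rule
    # and the input values; rerunning it can only retrace the same trajectory.

    def learn(samples, lr=0.1):
        """Adjust a single weight from labeled samples; the result is a pure
        function of (samples, lr, initial weight). Nothing is freely chosen."""
        w = 0.0
        for x, label in samples:
            prediction = 1 if w * x > 0.5 else 0
            w += lr * (label - prediction) * x    # predetermined update rule
        return w

    data = [(1.0, 1), (0.2, 0), (0.9, 1)]
    print(learn(data))   # same inputs, same program, same "learned" weight
    print(learn(data))   # identical output: the "choices" were never open

In Peirce's terms, every step here remains thirdness of the first kind: the sequence of associations is compelled from outside, however much the surface behavior resembles learning.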
But another question remains, and that is Searle's: is this kind of thought "minded"? Does it require, or may it possess, understanding? To answer this question, we must address the differences between Searle and Peirce in their attitudes toward thought and toward formal languages. Searle regards syntax and semantics as inherently separable in formal languages, and thus purely formal operations on symbols with arbitrary meanings are clearly differentiated from operations on symbols possessing meaning, i.e., semantic operations. In addition, understanding, for Searle, involves mental processes which are entirely unconscious: brain-processes that cause "conscious phenomena" (1994, pp. 239-241) without themselves being aspects of those phenomena's content. These brain-processes can correspond to, i.e., produce the same effects as (although they are not identical with), syntactic rules.
Peirce, on the other hand, regards all sensation as conscious. Thus, the entities he calls "sensations" at the lowest level seem to be mental entities that might be termed "pre-qualia," indicating at least that, while we are not aware of these sensations as individuals over time, we are aware of them in the instant in which they are present to us. In his discussion on pp. 42-44 of Some Consequences of Four Incapacities, for example, those sensations are claimed to be "unanalyzable," and "something which we cannot reflectively know" (1992d, p. 42). However, as this quote indicates, although these sensations are, as I noted above, immediate, fleeting, and gone "absolutely" when their instant of perception has passed, during that instant, in contrast, they are present in our consciousness as mental contents. There is, in addition, "a real effective force behind consciousness" (p. 42), which unites the individual sensations in their "continuous stream" into those mental events and contents of which we are continuously aware (and note that this "force," which works on the sensations, while it seems unconscious, is still mental). Peirce, then, seems to have as his starting point a class of entities corresponding, roughly, to nerve signals, entities which have something like the status of pre-qualia mental contents: a type of entity that Searle denies, if I understand him correctly.
Peirce's position on symbols is related to his position on mental contents. In contrast to Searle, he recognizes no symbols without meaning, i.e., symbols related only by syntactical rules. Quite the opposite, in fact. His extensive classification of types of signs (there are up to sixty-six types; Houser & Kloesel, 1991, p. xxxvii) includes what may today be termed analog symbols: icons which correspond structurally in some relevant manner to that which they symbolize. Those icons, according to Peirce, are necessary for all thinking, including deductive logic. Thus, in speaking of errors which are made about the nature of deductive logic, he states, "one such error is [in thinking] that demonstrative reasoning is something altogether unlike observation ... [iconic representations] represent relations in the fact by analogous relation in the representation ... it is by observation of diagrams that the [deductive] reasoning proceeds" (1988, p. 41).
In addition, according to Ketner, Peirce uses the term "diagram" much more generally than to refer to pictures or pictograms; virtually all models, whether verbal, pictorial, or mathematical, count as diagrams: "Indeed, 'diagram' for Peirce is roughly equivalent to a generalized notion of 'model'" (1988, p. 47). This position opens the question, so critical to the notion of analog representation, of the nature and level of correspondence between a model and its original. That is, in what sense, if any, does a drawing of a triangle represent that entity more directly than the set of equations specifying the line segments constituting the triangle? This question is another way of approaching the issue of the differences between Peirce and Searle. In maintaining that a biological system is necessary to instantiate thought processes, Searle is maintaining, in effect, that there is a difference between a certain type of physical instantiation of a model and a symbolic instantiation of the same model. Peirce, on the other hand, holds that there is no basic difference, as far as thought and meaning are concerned, in the means with which models are instantiated; rather, it is the type of operations (e.g., deductive vs. abductive) that the system, however instantiated, performs on the model that distinguishes levels of thought (see below for further development of this idea).
Peirce was explicitly stating that deductive reasoning, as performed by mathematicians, involves much more than syntactical operations; in fact, the representations necessary for deductive reasoning are, at least in part, icons rather than arbitrary symbols. Even though, for him, deductive reasoning is in part algorithmic, i.e., capable to some extent of being performed by machines, it necessitates, in toto, the highest levels of thought (e.g., "every kind of consciousness enters into cognition" [1992b, p. 260]). It is, then, not as a consequence of a fundamental difference in types of symbols and/or symbolic operations, with or without semantic content, but as a consequence of different levels of thought that Peirce argues for the non-computability of non-algorithmic logic.
Thus Peirce, in contrast to Searle, would not allow a separation between syntax and semantics. He would claim, I think, that what Searle is terming "syntactic rules" are in fact meanings in some basic sense, and that if those meanings were simple enough that pure deduction, i.e., thinking (as he termed it) of the first type of thirdness, was all that was required, then a machine could indeed duplicate such "routine operations" (1992d, p. 43). For Peirce, the difference lay in the use of "fixed methods" of thought, in contrast to "self-critical formations of representations" (p. 46). "If a machine works according to a fixed principle, unless it is so contrived that it would improve itself, it would not be, strictly speaking, a reasoning-machine" (p. 46). The difference for Peirce, then, was not a syntax/semantics difference, but a difference in "self-control": "routine operations call for no self-control" (p. 43); that is, routine operations, algorithmic procedures, entail no self-critical modifications, no learning from error, no hypothesis testing. There is, then, a sense in which even a digital computer has "mind" or "understanding." In fact, Peirce states, referring to the difference between mental modeling in reasoning and laboratory experimentation with apparatus, that "every machine is a reasoning machine, in so much as there are certain relations between its parts that were not expressly intended. A piece of apparatus for performing a physical or chemical experiment is also a reasoning machine ... [they are] instruments of thought, or logical machines" (p. 52).
Difficulties
Each of these positions has its own problems. Searle's is very compelling, but it is based on a notion of symbol manipulation which is foreign to Peirce's conception, and, in addition, on a notion of understanding or meaningfulness which, while seeming plausible, even obvious, becomes, upon investigation, if not problematic, then at least extremely complex. That is, in the Chinese Room argument, Searle maintains that "like a computer, I manipulate symbols, but I attach no meaning to the symbols" (1990, p. 26), and that in a computer, "symbols are manipulated without reference to any meanings" (p. 27). To maintain this, as I have mentioned, presupposes the clear separation of rule-governed operations, i.e., syntax, and meaning-governed operations, i.e., semantics. The conceptualization of logical and linguistic systems as possessing categories of operations differentiable in this manner, however, although it has a long history in analytic philosophy, has recently been disputed by theories in cognitive linguistics, in which all linguistic operations are effectively semantic (e.g., Johnson, 1993; Lakoff, 1990). Natural language operations, in these theories, cannot be categorized in this manner, and neither, by extension, can operations in formal languages (see especially Lakoff and Núñez, 1996). Thus, for these theories, as for Peirce's, in contrast with Searle's strict distinction between syntax and semantics, even the simplest manipulation of arbitrary symbols is, in some primitive sense, a manifestation of understanding.
Another objection to Searle's position is that the brain's basic processes and components are just as "mindless" as those of a computer. Thus, one might take the basic building blocks of the CNS, the individual neurons (roughly speaking), and use these as examples of mindless components which, when properly interacting, produce mind, just as the computer's mindless 1s and 0s, or Boolean logic components, might, given this analogy. Searle's response to this objection would be, I think, that whether or not individual neurons are mindless, they instantiate meaning through their physical ("biological") structure in ways that arbitrary symbols cannot, and that this intrinsic structuring of the objective properties of neurons provides them with biases in selection and response that arbitrary symbols, however instantiated, cannot possess. The AI response might be that these selection and response biases could be programmed into a simulation of a neuron that would then behave functionally like an "objective" neuron. Peirce's response, in turn, would perhaps be that the analog components of deductive symbolisms, the "diagrams," instantiate or imply structuring analogous to the neuron's physical structure. Doing justice to this debate is beyond the scope of this paper, but the fact that some philosophers, e.g., Dennett (1991), take the AI position, while Dreyfus (1993), for example, argues against it, is an indication, at least, that there is substance to both of these arguments.
Peirce's ideas also have their problems. It is well known that systems can be built with internal controls: thermostats, to take an extreme example, are self-correcting, and are very simple machines, not even computers. Thus "self-critical" systems of varying complexity can be and are being built. Are they thinking, in any interesting (i.e., the second or third types of "thirdness") sense? In the case of a thermostat, this is clearly not the case: its behavior, although it involves negative-feedback self-correction, is nonetheless "externally controlled" and completely repetitive, in the sense that it can never originate new states but merely repeats old ones. Just what, precisely, does "self-criticality" entail in a machine, however? Feedback and recursive processes in computers have been implemented for decades; computers now learn, in some sense of the term. Peirce would reply, I think, that computer learning proceeds through algorithms; that is, that the determinants of the changes in the computer's programming are themselves fixed, and thus at this level the machine is not self-critical, i.e., it cannot originate its own programming. An opponent might answer that biological systems themselves are constrained by physiological parameters. Peirce's response, I believe, would be to invoke his principle of chance: "it is essential that there should be an element of chance in some sense as to how the cell shall discharge itself" (1992b, p. 265). A proponent of AI, however, might respond that in fact computers can and do use random number generators, for example, to produce just such an effect. Again, doing justice to this debate would require another essay.
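The thermostat case can be stated as a few lines of feedback; the temperatures and increments below are invented purely for illustration.

    # A minimal sketch of the thermostat: negative-feedback self-correction
    # that only ever toggles between the same two predetermined states.

    def thermostat(temp, setpoint=20.0):
        """Purely reactive 'self-correction': the state is fixed by the input."""
        return "heat on" if temp < setpoint else "heat off"

    temp = 17.0
    for _ in range(5):
        action = thermostat(temp)
        print(f"{temp:4.1f} C -> {action}")
        temp += 1.0 if action == "heat on" else -0.5   # crude model of the room

    # The loop corrects itself but never originates a state outside {on, off}.

Nothing in such a loop, however it is elaborated, produces a state that was not already written into it, which is the sense in which it falls short of Peirce's self-critical formation of representations.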
Peirce and Searle: conclusions
What are the implications of the difference between Peirce and Searle? We have seen that, for all the initial superficial similarities, there are profound differences between their attitudes toward symbolic systems and "thinking" machines. While Searle dismisses the possibility of computers which understand in anything like a human sense (unless those machines are somehow "biologically based"), Peirce does not. For Peirce, virtually any machine, first, is an extension of human thought, and second, in the case of calculating machines, already employs, to a greater or lesser extent, some aspects of human thought and thus of human understanding. A Peircean "theorematic machine" (1988, p. 52), however, would require a kind of learning which Searle certainly, and Peirce probably, would regard as beyond the capabilities of the digital computer. One issue related to Peirce's position concerns the boundaries between the various types of thinking. If there is a clear boundary between the first and second, or second and third, types of "thirdness", that is, if there is at least some deductive thinking which does not partake of induction, then it might be possible to argue that although computers could perform deductive thinking, they nevertheless could not perform the full range of human thinking: they could never cross that boundary. The argument might be that computers intrinsically require algorithmic deductive logic, and that even the introduction of randomness into their algorithms would not suffice to produce true induction, much less abduction, because human beings introduce variability and creativity, "self-correction," through processes which are neither random nor Turing-computable, and those processes are probably necessary for induction and certainly are for abduction. There are arguments to this effect (e.g., Siegelmann, 1999), but they have not yet been fully developed (i.e., integrated into the theoretical basis of AI).
At the very least, this discussion leads to one conclusion and some concrete directions for speculation. First, according to both positions, Searle's and Peirce's, the digital computer, as it stands, is probably insufficient for mimicking, or even simulating in any interesting manner, human thought. Given that conclusion, we might ask whether specific enough descriptions of the types of thought that need to be realized by computers, i.e., induction and abduction, are possible; that is, can one extrapolate to other types of devices which can perform operations that digital computers perform only with difficulty or, in the case of non-Turing-computable mathematics, not at all?
To approach this in another way, let us ask the following: how would a general-purpose, syntax-only device have to operate? This device would have to use purely formal operations on a finite set of abstract symbols for which a (possibly infinite) number of sets of specific meanings or values could be substituted. Given most current theories of syntax, the operations on the symbols would have to consist of no more than a finite set of well-defined, abstract logical processes, and the specific meanings, when substituted, would have to be equally well-defined. But this does describe Turing machines, and digital computers. Even when such a computer runs a particular program, the values at the addresses, duple numbers, have no necessary interpretations. Is it possible, by looking at the structure of the logical processes, to uniquely assign a specific interpretation to a specific logical process before the substitution of specific values for abstract symbols is made? This seems unlikely, since it assumes that such a uniquely substitutable logical process, however simple or complex, uniquely describes some aspect of reality. Thus these devices are subject to Searle's objections.
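One way to see the force of this is to note that a single abstract transition table reads equally well under entirely different assignments of meaning; the table and both "interpretations" below are invented for illustration.

    # A minimal sketch: the same purely formal process under two readings.

    TRANSITIONS = {("q0", "a"): ("q1", "x"), ("q1", "b"): ("q0", "y")}

    def run(tape, state="q0"):
        """Apply the abstract rules to abstract tokens; no meanings are consulted."""
        out = []
        for symbol in tape:
            state, emitted = TRANSITIONS[(state, symbol)]
            out.append(emitted)
        return out

    trace = run(["a", "b", "a"])
    as_chat   = {"x": "greeting", "y": "reply"}     # one reading of the output
    as_stocks = {"x": "buy",      "y": "sell"}      # an equally consistent one
    print([as_chat[t] for t in trace], [as_stocks[t] for t in trace])

Nothing in the structure of the transitions privileges either reading, which is just Searle's point about the observer at the one-way window.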
Then the opposite: what would a semantically based machine, a Peircean "theorematic" machine, look like? Its symbols would not only have specific intrinsic values but specific intrinsic meanings; in a sense they would not be symbols at all, but individual entities. Thus, it would not necessarily employ a finite set of well-defined operations, nor could it use abstract symbols. The operations on these entities would have to be specific to both the entities and the goals of the operations. But this describes an analog computing device: a slide rule or a brain.
The reasoning is this: the information about some entity, that which distinguishes that individual from others of the same class, is exactly the original, "sensory," information about that entity. Any information abstracted from this is class information, even if for finite classes. Abstract information thus cannot be unique to individuals: in the functional sense, it cannot retain the information which distinguishes individuals, and this is precisely what the term "abstract" means. Thus, since a computer deals only with abstract information, it can never have "meaning" in a human (or, less controversially perhaps, in a Searlian) sense, because it can never deal with individuals, only with classes. Even lists of labels of characteristics, however long or complete, cannot capture individuation. The members of such a list are abstractions. The usual counter-argument to this is that the totality of these abstractions, given that one list differs in at least one of its members from all other lists, will characterize an individual. However, since that characterization is performed using a list consisting solely of abstractions, one cannot proceed from that level to the concrete, sensory level, the level of individuals, even if such a list were of infinite length. Any list consisting only of class information, by its very nature, must be such that any of its members may always stand for any of a potentially infinite set of individuals.
It seems, then, that the retention of original differentiating information is the distinguishing factor between semantically based and syntactically based systems, or at least between the theorematic machine and the digital computer. That is, it must be the case that the original differentiating information, when coded, contains at least some part of the code which is unique to an individual, in contrast to consisting of a list, however long, of abstract terms. A "concrete," "sensory" description must have some individual members: elements for which no substitutions can be made.
This implies utilizing either a pure analog or a mixed digital-analog system. This kind of system seems likely to be the only type which might conform to both Peirce's and Searle's requirements. These types of processes, containing both characterizations of individuals and abstract operations, would seem necessarily to be meaningful in Searle's sense, because no alternatives to a particular interpretation would be possible; and by the same token, they would be, at least in principle, capable of "thirdness," since the potentially infinite wealth of individual information might encourage flexibility, even in processes normally algorithmic. These hybrid devices can, then, satisfy some of Peirce's requirements for variability, through modification by the environment and internal self-correction (see also Cariani, 1989), and may thus qualify as precursors, at least, of Peircean theorematic machines. Such systems are also "symbolically grounded" in Harnad's (1990) sense, for the same reasons. This conclusion would seem to support efforts to employ robots utilizing analog sensors and processors, further refined by digital operations, as long as those latter operations preserve the original analog information in some manner. There are interesting implications here for neural processing in the CNS as well.
References
1. Boole, G. (1958). An investigation of the laws of thought (2nd ed.). New York, NY: Dover Publications.
2. Cariani, P. (1989). On the design of devices with emergent semantic functions. Unpublished doctoral dissertation, State University of New York, Binghamton, NY.
3. Dennett, D. C. (1991). Consciousness explained (1st ed.). Boston, MA: Little, Brown and Company.
4. Dreyfus, H. L. (1993). What computers still can't do: a critique of artificial reason (2nd ed.). Cambridge, MA: The MIT Press.
5. Habermas, J. (1995). Peirce and communication. In K. L. Ketner (Ed.), Peirce and contemporary thought (Vol. 1). New York, NY: Fordham University Press.
6. Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346.
7. Houser, N., & Kloesel, C. (Eds.). (1991). The essential Peirce: selected philosophical writings (Vol. 1). Bloomington, IN: Indiana University Press.
8. Johnson, M. (1993). Moral imagination: implications of cognitive science for ethics (1st ed.). Chicago: The University of Chicago Press.
9. Ketner, K. L. (1988). Peirce and Turing: comparisons and conjectures. Semiotica, 68(1/2), 33-61.
10. Lakoff, G. (1990). Women, fire, and dangerous things (2nd ed.). Chicago, IL: The University of Chicago Press.
11. Lakoff, G., & Núñez, R. E. (1996). The metaphorical structure of mathematics: sketching out cognitive foundations for a mind-based mathematics. In L. English (Ed.). Hillsdale, NJ: Erlbaum.
12. Peirce, C. S. (1992a). Deduction, Induction, and Hypothesis. In N. Houser & C. Kloesel (Eds.), The essential Peirce: selected philosophical writings (Vol. 1, pp. 186-199). Bloomington, IN: Indiana University Press.
13. Peirce, C. S. (1992b). A Guess at the Riddle. In N. Houser & C. Kloesel (Eds.), The essential Peirce: selected philosophical writings (Vol. 1, pp. 245-279). Bloomington, IN: Indiana University Press.
14. Peirce, C. S. (1992c). Questions Concerning Certain Faculties Claimed for Man. In N. Houser & C. Kloesel (Eds.), The essential Peirce: selected philosophical writings (Vol. 1, pp. 11-27). Bloomington, IN: Indiana University Press.
15. Peirce, C. S. (1992d). Some Consequences of Four Incapacities. In N. Houser & C. Kloesel (Eds.), The essential Peirce: selected philosophical writings (Vol. 1, pp. 28-55). Bloomington, IN: Indiana University Press.
16. Searle, J. R. (1990). Is the brain's mind a computer program? Scientific American, 262(1), 26-37.
17. Searle, J. R. (1994). The rediscovery of the mind (5th ed.). Cambridge, MA: The MIT Press.
18. Siegelmann, H. T. (1999). Neural networks and analog computation: beyond the Turing limit (Progress in Theoretical Computer Science). Boston, MA: Birkhäuser Boston.
19. Wiener, P. P. (1951). Extracts from the New Essays on the Human Understanding (Leibniz: selections). New York, NY: Charles Scribner's Sons.