Hauser, L. (February 1993). "Why Isn't My Pocket Calculator a Thinking Thing?" Minds and Machines, Vol. 3, No. 1, pp. 3-10.
Calculating is thinking.
Cal calculates.
Therefore, Cal thinks.

Not only do the premises seem true on their face, but obvious. The brunt of my argument will consist in considering various ways in which this argument might be challenged, and in showing that the cost of each of the strategies proposed is insupportable. Either it turns out that we ourselves can't be said to think or calculate if our performances are judged by the standards being proposed to rule out Cal; or else it turns out that the standards are such that it is impossible to verify whether anything or anyone (save, perhaps, oneself) meets them.
The main objections I consider attempt to show that this syllogism commits the fallacy of four terms, on the grounds that "calculation" is only equivocally predicable of Cal; that Cal doesn't really calculate because Cal's performances lack features essential for genuine cognition or calculation. These features generally fall under the headings of four traditional "marks of the mental": consciousness, autonomy, intentionality, and unity. Of these, the last two underwrite the most serious challenges to Cal's claim to be calculating, and hence a thinking thing; yet in the end, I urge that neither of these objections is all that compelling either: nowhere near so compelling as our original premises. What is amiss, I urge, is 'our intuition' that our pocket calculators don't think: I suggest that this intuition is as theory-laden (and perhaps theology-laden) as Descartes' 'intuition' that oysters don't have souls.
The general argumentative strategy is that no proposed criterion of thinking is acceptable if its application leads to the conclusion that people don't think, or that we have no way of telling whether they do, in cases where we think we know well enough that they do. The demand that proposed criteria be applied consistently to human and machine, and not selectively to machines, is unexceptionable: otherwise one begs the question.
If a conscious or introspective sense of freedom is supposed to evidence freedom, the Autonomy Objection can bear no more weight than the appeal to consciousness it presupposes. Yet without such appeal, our own claims to be thinking could be no better grounded (and might be worse) than libertarian metaphysical doctrine. But my beliefs that I have beliefs, desires, and such -- even my belief that you do -- seem nowise so dubious as that.
What Dretske takes to be the missing ingredient -- what we have that computers lack -- are causal connections between the signs and the things they signify. Put crudely, the difference between my contentful belief that dogs are animals and a computer's 'representation' of this same information -- say, by storing the Prolog clause animal(X) :- dog(X) in RAM -- is that my representation came to be, or could be, elicited by the actual presence -- the actual sights and sounds -- of dogs. It is these perceptually mediated connections between my tokenings of the English word "dog" and actual dogs that make that word signify those animals for me; and it is for want of such connections that computer representations -- tokenings, say, of the Prolog atom dog -- lack such signification for the computer. Here Dretske sets out an Intentionality Objection that stands or falls independently of appeals to consciousness, as Searle's formulations do not.
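To fix ideas, here is a minimal sketch of the kind of stored representation at issue (the further fact about a particular dog, fido, is my own hypothetical addition, not the paper's):

    % The stored rule: whatever is a dog is an animal.
    animal(X) :- dog(X).

    % A stored fact about a particular (hypothetical) dog.
    dog(fido).

    % The query ?- animal(fido). succeeds: the machine derives that fido
    % is an animal from its stored tokens, whatever (if anything) those
    % tokens signify to it.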
Yet despite the superiority of Dretske's formulation (in providing a positive account of signification), it has two serious shortcomings. First, it has less force than Dretske seems to think, even in the most favorable cases, i.e., of signs for perceptible things such as dogs or (perhaps most favorably of all) perceptible qualities such as color or pitch. Second, not all words or signs are as rich in sensory associations as "red" or "dog". Where signs, such as numerals, represent abstractions such as numbers, it seems less plausible to think significance requires causal links to the things signified or any very robust causal-perceptual links with anything.
With regard to the first, most favorable cases, Dretske's account of reference threatens the conclusion that perceptually deficient humans are meaning-deprived also. Presumably (on this view) someone blind from birth cannot signify anything by color words; nor the congenitally deaf by words like "pitch" and "music". Yet I believe there are good reasons (see, e.g., Landau & Gleitman 1985) to hold that such persons can use words to refer to such things despite their lack of perceptual access to them. Someone blind from birth, for example, could know that fire engines are red, that red is a color, and even that "color" refers, roughly, to those qualities of things, deriving from their reflective properties, which the sighted see but they do not, and which stand to vision much as tone stands to hearing. More dramatically, on Dretske's view, how are we to avoid the absurd consequence that most of the words in Helen Keller's books, though signifying things to us, signified nothing (or at least much less) to Helen Keller? Even for words with very considerable ostensive or sensuous content, the ability to apply these words to the world on the basis of this content seems less crucial to signification than Dretske's views suggest. Perhaps some causal-perceptual associations for some terms in one's lexicon are prerequisite for signifying anything by any term at all, but Helen Keller's case seems to suggest these connections needn't be so direct or robust as Dretske has to require to make his case against (present-day) machine understanding.
But whatever the upshot of these reflections on the most favorable cases for Dretske -- terms ordinarily rich in ostensive or sensuous associations -- plainly cases such as the numerals are less favorable. Whatever weight Dretske's appeal to the perceptual disabilities of (present-day) digital computers has against their claims to mean anything by words such as "dog" and "red", this argument will be impotent to establish the more general claim Dretske wants -- that none of the symbols (present-day) computers process signify anything to these computers -- if Cal's understanding of the numerals he processes, e.g., is immune to such objections. How is Dretske's causal-perceptual account of reference supposed to work here, where the referents, numbers, don't seem capable either of being perceived or of entering into causal relations?
At this point, the only plausible move for a causal-perceptual theory of reference seems something like this: in order to mean numbers by numerals one must be able to apply numerals to items and events in the world, e.g., in counting; so in order to mean two by '2', say, one must be capable of reliably tokening '2' when presented with various pairs of objects or events. Yet even if this is correct, and causal-perceptual links are in some sense required for reference even to numbers, the senses of 'perceptual' and 'in the world' here cannot be very robust. Someone in a state of total (external) sensory deprivation might still count their breaths, or even how many times they performed a carry in doing a bit of mental addition; and if this is all that's required, it's clear that it's not enough to rule out Cal. If all that's required "of any aspiring symbol manipulator is, in effect, that some of its symbols be actual signs of the conditions they signify, that there be some symbol-to-world correlations that confer on these symbols an intrinsic meaning" (Dretske 1985, p. 29), then so long as Cal can count key presses and iterations of loops, this would seem causal-perceptual linkage enough to support his claim to mean numbers by his numeric tokens.
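A minimal sketch of the sort of symbol-to-world correlation this would concede to Cal (the representation of key presses as a list of press events is my own illustrative assumption, not anything in Dretske or in Cal's actual circuitry): a numeric token whose value is elicited by, and so is an actual sign of, the number of presses that occurred.

    % Each element of the list stands for one key-press event.
    count_presses([], 0).
    count_presses([press|Rest], N) :-
        count_presses(Rest, N0),
        N is N0 + 1.

    % ?- count_presses([press, press], N). yields N = 2: the token
    % produced is causally correlated with two presses having occurred.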
If this is right, it means the Intentionality Objection cannot be sustained across the board, with regard to all the symbols computers process. Perceptually impoverished as Cal is, it seems Cal has enough reality contact to support his claim to mean two by the binary numeral '10'.
On the other hand, while this may suffice to exclude hard-wired, special-purpose devices like Cal from the ranks of the thinking, it does not seem so effective against the claims of programmable machines, such as my laptop computer. Indeed, much of the deep philosophical interest of AI derives from the fact that programmable digital computers are in fact flexible, and even -- in the sense that "they can mimic any discrete state machine" (Turing 1950, p. 441) -- universal instruments.
This presents us with a conundrum: suppose my laptop computer -- call her Sparky -- were programmed to emulate Cal; suppose Sparky computes the same arithmetical functions, by the same procedures, as Cal. Now it seems odd to say Sparky calculates, but Cal doesn't, just because Sparky has other abilities (or at least can be programmed to have other abilities). If both compute the same functions using the same algorithms, aren't they -- in the sense relevant to cognitive attribution -- doing the same thing? Perhaps the Unity Objection, for all its traditional and intuitive warrant, is misguided. As Dretske remarks, "We don't, after all, deny someone the capacity to love because they can't do differential calculus. Why deny the computer the ability to solve problems or understand stories because it doesn't feel love, experience nausea, or suffer indigestion?" (Dretske 1985, p. 24).
What the Unity Objection seems to require -- and offers no prospect, so far as I can see, of providing -- is some account not only of how many and which other mental abilities a thing must have in order to calculate (or think), but of why. If Cal follows the same addition procedure as Sparky, and Sparky the same addition procedure as I, then it seems to me that Cal adds if I do; and when we do, "calculation" is predicable of both of us in exactly the same sense, regardless of whatever further mental abilities of mine Sparky lacks, or whatever further capacities of Sparky's are absent in Cal. Nor is it even essential that the procedures Cal and Sparky follow should emulate those I, or people generally, follow. This is not Searle's Brain Simulator reply but the reverse -- the CPU Simulator reply -- that it's enough that the procedure Cal follows be one that I could follow (e.g., by hand simulating Cal's processing) and that in doing this I would be calculating.
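As an illustration of the kind of procedure in question, here is a minimal sketch (my own reconstruction, not the paper's, and not a claim about Cal's actual circuitry) of ripple-carry binary addition, the sort of routine a calculator's adder executes and that a person could equally follow by hand, column by column:

    % Numbers are lists of bits (0 or 1), least significant bit first;
    % the two lists are assumed to be the same length.
    add_bits(A, B, Sum) :- add_bits(A, B, 0, Sum).

    add_bits([], [], 0, []).
    add_bits([], [], 1, [1]).
    add_bits([A|As], [B|Bs], CarryIn, [S|Ss]) :-
        full_adder(A, B, CarryIn, S, CarryOut),
        add_bits(As, Bs, CarryOut, Ss).

    % One column of the procedure: the sum bit and carry-out obtained
    % from two bits and a carry-in -- the same table a person consults
    % when adding by hand.
    full_adder(A, B, CarryIn, S, CarryOut) :-
        T is A + B + CarryIn,
        S is T mod 2,
        CarryOut is T // 2.

    % ?- add_bits([0,1], [1,1], Sum). yields Sum = [1,0,1], i.e., 2 + 3 = 5.

On the sketch's assumptions, the same steps could be executed by Cal's hardware, by Sparky's emulation of Cal, or by a person with pencil and paper; it is in this sense that all three would be following one procedure.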
Perhaps some special sense of "thinking" can be made out for which calculating is not sufficient -- perhaps some sense in which it's not sufficient to doubt or understand or will, etc., but in which it's necessary to (be able to) doubt and understand and will, etc. (as Descartes surely intended). Perhaps there is some sense in which "thinking" requires such unity, or universality of mental capacity -- or alternately some other traditional (or perhaps some non-traditional) mark(s) of the mental. At any rate -- whether or not such a sense of "thought" can be made out -- I have only claimed that Cal thinks in the ordinary generic sense of being a subject of at least one kind of contentful or mental state; not that he is a unified, or conscious, or autonomous self or soul or thinker in some special proprietary philosophical sense. I leave it to the opponent of AI to clarify what this sense is and to make out the case, if it can be made, against Cal's thinking in this sense.
Descartes, R. 1646. "Letter to the Marquess of Newcastle, 23 November 1646". Translated in A. Kenny, Descartes: Philosophical Letters. Clarendon Press: Oxford (1970). 205-208.
Descartes, R. 1642. Meditations on First Philosophy. Translated in J. Cottingham, R. Stoothoff, and D. Murdoch, The Philosophical Writings of Descartes, Vol. 2. Cambridge University Press: Cambridge (1984). 16-23.
Dretske, F. 1985. "Machines and the Mental". Proceedings and Addresses of the American Philosophical Association, Vol. 59. 23-33.
Landau, B. and Gleitman, L. 1985. Language and Experience: Evidence from the Blind Child. Harvard University Press: Cambridge, MA.
Nagel, T. 1974. "What Is It Like to Be a Bat?". Philosophical Review 83. 435-450.
Sartre, J. P. 1956. Being and Nothingness. Trans. H. Barnes. Citadel Press: Secaucus, NJ.
Searle, J. R. 1980. "Minds, Brains, and Programs". Behavioral and Brain Sciences 3. 417-424.
Turing, A. M. 1950. "Computing Machinery and Intelligence". Mind LIX. 433-460.
Wittgenstein, L. 1958. Philosophical Investigations. Basil Blackwell Ltd.: Oxford.