Hauser, L. (February 1993). Why Isn't My Pocket Calculator a Thinking Thing?, Minds and Machines, Vol. 3, No. 1, pp. 3-10.


Why Isn't My Pocket Calculator a Thinking Thing?

by
Larry Hauser
lshauser@aol.com
 

Abstract

My pocket calculator (Cal) has certain arithmetical abilities: it seems Cal calculates. That calculating is thinking seems equally untendentious. Yet these two claims together provide premises for a seemingly valid syllogism whose conclusion -- Cal thinks -- most would deny. I consider several ways to avoid this conclusion, and find them mostly wanting. Either we ourselves can't be said to think or calculate if our calculation-like performances are judged by the standards proposed to rule out Cal; or the standards -- e.g., autonomy and self-consciousness -- make it impossible to verify whether anything or anyone (save myself) meets them. While appeals to the intentionality of thought or the unity of minds provide more credible lines of resistance, available accounts of intentionality and mental unity are insufficiently clear and warranted to provide very substantial arguments against Cal's title to be called a thinking thing. Indeed, considerations favoring granting that title are more formidable than generally appreciated.

Introduction

The problem of Other Minds is not just about how one knows that other people are thinking or intelligent beings; it is also about how we know that rocks and roses aren't. From professor to paramecium to paperweight is a slippery slope; and a sense of this is one thing motivating Descartes' notorious views about the mindlessness of lower animals. Thus he defends his denial that any of them have any mental states at all (1646, p.208) on the grounds that "there is no reason for believing it of some animals without believing it of all, and many of them such as oysters and sponges are too imperfect for this to be credible." Similarly, I suppose, one reason for denying that any computers have any mental states at all is that there are likewise some of them, e.g., our pocket calculators, "too imperfect for us to believe it of them"; or conversely, it may be feared that if any computing device is rightly credited with any mental abilities at all -- even if 'only' the ability to add -- "then the barriers between mind and machine have been breached and there is no reason to think they won't eventually be removed." (Dretske 1985, p.25). Note, here, that the mental abilities we are tempted (if not practically compelled) to attribute to computers are precisely those higher rational abilities, e.g., calculation, which have traditionally been held to distinguish us from and exalt us above the lower animals; and it is ironic that so many contemporary arguments would deny mind to machines on the basis of their lack of faculties such as emotion and sense-perception that we share with the lower animals, when traditional arguments sought to deny soul and mind to these animals on the basis of their lack of faculties such as mathematical calculation which, more and more it seems, we are coming to share with computers.

Cal

My inquiry concerns the mental ability -- to calculate -- of my pocket calculator: call him Cal, for the sake of argument. I suppose most people's intuitions are that their pocket calculators don't think; yet we all allow -- or speak as if we allow -- that they add, subtract, multiply, and divide. In short, calculate. The trouble is that now, given the seemingly obvious thesis that calculation is thinking (indeed, a paradigm case), we have premises for a valid syllogism whose conclusion contradicts 'our' original intuitions.
Calculating is thinking.
Cal calculates.
Therefore, Cal thinks.
The premises seem not only true on their face, but obvious. The brunt of my argument will consist in considering various ways in which this argument might be challenged, and showing that the cost of each of the strategies proposed is insupportable. Either it turns out we ourselves can't be said to think or calculate if our performances are judged by the standards being proposed to rule out Cal; or else it turns out the standards are such that it is impossible to verify whether anything or anyone (save, perhaps, oneself) meets them.

The main objections I consider attempt to show this syllogism commits the fallacy of four terms on the grounds that "calculation" is only equivocally predicable of Cal; that Cal doesn't really calculate because Cal's performances lack features essential for genuine cognition or calculation. These features generally fall under the headings of four traditional "marks of the mental": consciousness, autonomy, intentionality, and unity. Of these, the last two underwrite the most serious challenges to Cal's claim to be calculating, and hence a thinking thing; yet in the end, I urge that neither of these objections is all that compelling either -- nowhere near so compelling as our original premises. What is amiss, I urge, is 'our intuition' that our pocket calculators don't think: I suggest that this intuition is as theory-laden (and perhaps theology-laden) as Descartes' 'intuition' that oysters don't have souls.

Consciousness

The argument from consciousness holds that the essence of thinking is its subjectivity: there must be something that it's like to be a pocket calculator for the calculator, or else it's not really calculating. The trouble with this objection is that there is no way to substantiate just how far (beyond myself) this mysterious 'inner light' of consciousness extends. This "other minds" reply does not, as Searle (1980, p.422) jeers, "feign anesthesia": it only requires critics of AI to apply consistently the criterion they propose to disqualify computers' claims to think. What the Other Minds Reply says is that if consciousness were our basis for deciding whether any intelligent-seeming thing was really a thinking subject, then one should have skeptical doubts about other minds. So, if we don't, and shouldn't, seriously entertain such doubts, this seems to show that we don't (or shouldn't) appeal to consciousness to decide what is and isn't thinking.

The general argumentative strategy is that no proposed criterion of thinking is acceptable if its application leads to the conclusion that people don't think, or that we have no way of telling this, in cases where we think we know well enough that they do. The demand that proposed criteria be applied consistently to humans and machines, and not selectively to machines, is unexceptionable: otherwise one begs the question.

Autonomy

Autonomy objections, like the Objection from Consciousness, touch some deep chord; and such objections -- that computers lack freedom or wills of their own, that they "can only do what we tell them" -- are among the most frequently heard arguments against artificial intelligence. Autonomy, as a criterion for distinguishing genuine from apparent cognition, faces problems akin to those which arise for consciousness. If one appeals to introspection -- if our awareness of and basis for attributing free will or autonomy is supposed to be phenomenological -- then autonomy objections inherit all the problems of the Objection from Consciousness: it would be impossible to know (or even have justified belief) that anything or anyone (save oneself) really is a thinking subject of mental states. On the other hand, if we reject the phenomenological criterion of autonomy, as I suspect we should, the Autonomy Objection becomes even less supportable. With consciousness of autonomy as the criterion of autonomy, it seems we can never know that anyone else has it; without such consciousness, it seems we can't even know that we ourselves do. Note that the burden of the Autonomy Objector here is not just to show that there are free acts in some strong libertarian sense -- though this would be onerous enough -- but to show that certain acts of ours are free in this sense, and that no acts of computers are, or (presumably) ever could be. I have no idea how -- without appeal to one's introspective sense of freedom as evidence of actual freedom -- one might propose to discharge this burden.

If a conscious or introspective sense of freedom is supposed to evidence freedom, the Autonomy Objection can bear no more weight than the appeal to consciousness it presupposes. Yet without such appeal, our own claims to be thinking could be no better grounded (and might be worse) than libertarian metaphysical doctrine. But my beliefs that I have beliefs, desires, and such -- even my belief that you do -- seem nowise so dubious as that.

Intentionality

A third line of objection to the claim that computers generally -- and Cal, in particular -- really think appeals to the intentionality of mental states. The Intentionality Objection, as Searle (1980) puts it, is that the symbols or information computers process are symbolic of something, or information about something, only to us; they are not of or about anything to the computer. So stated, the Intentionality Objection threatens to collapse into the Consciousness Objection -- if the difference between my calculation and Cal's 'calculation' is just supposed to be that there is something that it's like for me to calculate that 2+9 is 11, but nothing that it's like for Cal. Note how closely what Searle says about the computer's relations to the symbols and information it processes -- that they're symbolic and informative only for us, not for the computer -- echoes formulations such as Nagel's (1974) or Sartre's (1956) about what it is to be conscious. To go on from this to deny the possibility of any behavioral or public tests of intentionality (to deny these inward states any outward criteria), as Searle seems to do, really does seem to make the Intentionality Objection into a (species of) Consciousness Objection. To avoid this collapse one has to indicate factors (besides consciousness) that distinguish unthinking 'syntactic' manipulations from contentful thought; and the factors in question must be public, or observable -- the sorts of things Wittgenstein referred to as "outward criteria" (Wittgenstein 1958, §580). For this reason Dretske's (1985) version of the Intentionality Objection has much to recommend it over Searle's.

What Dretske takes to be the missing ingredient -- what we have that computers lack -- are causal connections between the signs and the things they signify. Put crudely, the difference between my contentful belief that dogs are animals and a computer's 'representation' of this same information -- say, by storing the Prolog clause animal(X) :- dog(X) in RAM -- is that my representation came to be, or could be, elicited by the actual presence -- the actual sights and sounds -- of dogs. It is these perceptually mediated connections between my tokenings of the English word "dog" and actual dogs that make that word signify those animals for me; and it is for want of such connections that computer representations -- tokenings, say, of the Prolog atom dog -- lack such signification for the computer. Here Dretske sets out an Intentionality Objection that stands or falls -- as Searle's formulations do not -- independently of appeals to consciousness.
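To make the example concrete, here is a minimal, runnable sketch of the sort of stored representation Dretske's example envisages, written in standard Prolog; the fact dog(fido) is my own hypothetical addition, included only so the program can be queried:

    % The clause from the text: X is an animal if X is a dog.
    animal(X) :- dog(X).

    % A hypothetical fact, added only for illustration.
    dog(fido).

    % The query  ?- animal(fido).  then succeeds (Prolog answers "true"),
    % yet nothing in that derivation connects the atom dog to any actual dog.

The point at issue, of course, is whether such stored clauses are about dogs for the machine, or only for us.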

Yet despite the superiority of Dretske's formulation (in providing a positive account of signification), it has two serious shortcomings. First, it has less force than Dretske seems to think, even in the most favorable cases, i.e., of signs for perceptible things such as dogs or (perhaps most favorably of all) perceptible qualities such as color or pitch. Second, not all words or signs are as rich in sensory associations as "red" or "dog". Where signs, such as numerals, represent abstractions such as numbers, it seems less plausible to think significance requires causal links to the things signified or any very robust causal-perceptual links with anything.

With regard to the first, most favorable cases, Dretske's account of reference threatens the conclusion that perceptually deficient humans are meaning-deprived as well. Presumably (on this view) someone blind from birth cannot signify anything by color words; nor can the congenitally deaf by words like "pitch" and "music". Yet I believe there are good reasons (see, e.g., Landau & Gleitman 1985) to hold that such persons can use words to refer to such things despite their lack of perceptual access to them. Someone blind from birth, e.g., could know that fire engines are red, that red is a color, and even that "color" refers, roughly, to those qualities of things, deriving from their reflective properties, which the sighted see but they themselves don't, and which stand in a relation to vision similar to that of tone to hearing. More dramatically, on Dretske's view, how are we to avoid the absurd consequence that most of the words in Helen Keller's books, though signifying things to us, signified nothing (or at least much less) to Helen Keller? Even for words with very considerable ostensive or sensuous content, the ability to apply these words to the world on the basis of this content seems less crucial to signification than Dretske's views suggest. Perhaps some causal-perceptual associations for some terms in one's lexicon are prerequisite for signifying anything by any term at all, but Helen Keller's case seems to suggest these connections needn't be so direct or robust as Dretske has to require to make his case against (present-day) machine understanding.

But whatever the upshot of these reflections on the most favorable cases for Dretske -- terms ordinarily rich in ostensive or sensuous associations -- plainly cases such as the numerals are less favorable. Whatever weight Dretske's appeal to the perceptual disabilities of (present-day) digital computers has against their claims to mean anything by words such as "dog" and "red", this argument will be impotent to establish the more general claim Dretske wants -- that none of the symbols (present-day) computers process signify anything to these computers -- if Cal's understanding of the numerals he processes, e.g., is immune to such objections. How is Dretske's causal-perceptual account of reference supposed to work here, where the referents, numbers, don't seem capable either of being perceived or of entering into causal relations?

At this point, the only plausible move for a causal-perceptual theory of reference seems to be something like this: in order to mean numbers by numerals one must be able to apply numerals to items and events in the world, e.g., in counting; so in order to mean two by '2', say, one must be capable of reliably tokening '2' when presented with various pairs of objects or events. Yet even if this is correct, and causal-perceptual links are in some sense required for reference even to numbers, the senses of 'perceptual' and 'in the world' here cannot be very robust. Someone in a state of total (external) sensory deprivation might still count their breaths, or even how many times they performed a carry in doing a bit of mental addition; and if this is all that's required, it's clear that it's not enough to rule out Cal. If all that's required "of any aspiring symbol manipulator is, in effect, that some of its symbols be actual signs of the conditions they signify, that there be some symbol-to-world correlations that confer on these symbols an intrinsic meaning" (Dretske 1985, p.29), then so long as Cal can count key presses and iterations of loops, this would seem causal-perceptual linkage enough to support his claim to mean numbers by his numeric tokens.
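To see how little this minimal requirement demands, consider a small Prolog sketch -- my own illustration, not Dretske's or the paper's -- of an addition procedure that, besides producing a sum, tokens a count of how many carries it performed along the way. Digits are written least-significant first, and the two numbers are assumed to have the same number of digits:

    % add_count(Xs, Ys, CarryIn, Sum, NCarries): add two digit lists
    % (least-significant digit first, equal length), returning the sum
    % digits and a count of how many carry events occurred.
    add_count([], [], 0, [], 0).
    add_count([], [], 1, [1], 0).
    add_count([X|Xs], [Y|Ys], Cin, [D|Ds], N) :-
        T is X + Y + Cin,
        D is T mod 10,            % the digit written in this column
        Cout is T // 10,          % the carry passed to the next column
        add_count(Xs, Ys, Cout, Ds, N0),
        N is N0 + Cout.

    % ?- add_count([9,4], [3,2], 0, Sum, Carries).    % 49 + 23
    % Sum = [2,7], Carries = 1.                       % i.e. 72, with one carry

The final value of Carries is a numeric token whose occurrence is caused by, and reliably correlates with, the machine's own carry events -- just the sort of minimal symbol-to-world (or symbol-to-event) correlation the quoted requirement seems to demand.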

If this is right, it means the Intentionality Objection cannot be sustained across the board, with regard to all the symbols computers process. Perceptually impoverished as Cal is, it seems Cal has enough reality contact to support his claim to mean two by [the binary numeral] '10'.

Unity

Since Plato, at least, unity has been advanced as a distinguishing attribute of minds: the idea is that minds are not composed of parts (as bodies are), but are rather indissoluble units -- a claim that not only underwrites perhaps the most important traditional argument for the immortality of the soul, but has also been continually in the background of our whole discussion to this point. Each of the various objections we considered makes some tacit appeal to unity: each claims that disconnected from some further mental abilities or events (consciousness, or autonomy, or intentionality) Cal's seeming calculation is not really calculation at all. What distinguishes these abilities as "marks of the mental" deserving special treatment -- as compared to, say, the ability to enjoy the taste of strawberries -- is just their greater intuitive and traditional claims to being uneliminable aspects of all mental activities or states. Having been unsuccessful in sustaining objections to Cal's claim to calculate on the basis of any of these specific abilities, we turn now to a nonspecific form of this whole general line of objection. With regard to Cal's calculation, a proponent of the Unity Objection might respond that if calculation is the only cognition-like thing Cal can do, then it's not thinking, and perhaps not really even calculation. The Unity Objection, rather than claiming that some specific mental ability is necessary for thought or calculation, claims that what is essential to something's thinking, or even calculating, is having enough interconnected mental abilities of various sorts. Even if we stop short of the strong Cartesian demand that thought must be "a universal instrument which can serve for all kinds of situations" (Descartes 1637, p.140), perhaps we can at least require that would-be thought processes should be flexible instruments which can serve for various situations. This seems sufficient to exclude Cal without excluding me, or you, or Helen Keller.

On the other hand, while this may suffice to exclude hard-wired, special-purpose devices like Cal from the ranks of the thinking, it does not seem so effective against the claims of programmable machines, such as my laptop computer. Indeed, much of the deep philosophical interest of AI derives from the fact that programmable digital computers are in fact flexible, and even -- in the sense that "they can mimic any discrete state machine" (Turing 1950, p.441) -- universal instruments.

This presents us with a conundrum. Suppose my laptop computer -- call her Sparky -- were programmed to emulate Cal: suppose Sparky computes the same arithmetical functions, by the same procedures, as Cal. Now it seems odd to say Sparky calculates, but Cal doesn't, just because Sparky has other abilities (or at least can be programmed to have other abilities). If both compute the same functions using the same algorithms, aren't they -- in the sense relevant to cognitive attribution -- doing the same thing? Perhaps the Unity Objection, for all its traditional and intuitive warrant, is misguided. As Dretske remarks, "We don't, after all, deny someone the capacity to love because they can't do differential calculus. Why deny the computer the ability to solve problems or understand stories because it doesn't feel love, experience nausea, or suffer indigestion?" (Dretske 1985, p.24).

What the Unity Objection seems to require -- and to offer no prospect, so far as I can see, of providing -- is some account not only of how many and which other mental abilities a thing must have in order to calculate (or think), but why. If Cal follows the same addition procedure as Sparky, and Sparky the same addition procedure as I, then it seems to me that Cal adds if I do; and when we do, "calculation" is predicable of both of us in exactly the same sense, regardless of whatever further mental abilities of mine Sparky lacks, or whatever further capacities of Sparky's are absent in Cal. Nor is it even essential that the procedures Cal and Sparky follow should emulate those I, or people generally, follow. This is not Searle's Brain Simulator reply but the reverse -- the CPU Simulator reply: it's enough that the procedure Cal follows be one that I could follow (e.g., by hand-simulating Cal's processing) and that in doing so I would be calculating.
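For illustration, here is a sketch of the kind of procedure at issue -- a bit-by-bit binary addition simple enough to be followed by hand, step by step, as well as by a circuit. The bit-level detail is my own assumption about how a device like Cal might work, not something the argument depends on; bits are written least-significant first, and the two bit lists are assumed to be the same length:

    % full_adder(A, B, Cin, Sum, Cout): one column of binary addition --
    % the single step a person hand-simulating Cal would perform per bit.
    full_adder(A, B, Cin, Sum, Cout) :-
        Sum is (A + B + Cin) mod 2,
        Cout is (A + B + Cin) // 2.

    % add_bits(As, Bs, CarryIn, Sum): chain the full adder across all bits.
    add_bits([], [], 0, []).
    add_bits([], [], 1, [1]).
    add_bits([A|As], [B|Bs], Cin, [S|Ss]) :-
        full_adder(A, B, Cin, S, Cout),
        add_bits(As, Bs, Cout, Ss).

    % ?- add_bits([0,1], [1,1], 0, S).    % two (binary 10) plus three (binary 11)
    % S = [1,0,1].                        % five (binary 101), least-significant bit first

Whether these steps are executed by Cal's circuitry, by Sparky's program, or by a person with pencil and paper, the procedure followed is the same; and that, on the present reply, is what matters to whether calculation is going on.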

Conclusion

What the preceding arguments show -- I take it -- is that none of the four traditional marks of the mental considered provide a supportable basis for denying that Cal calculates in the same sense as you or I; i.e., I have sought to show that our initial syllogism does not commit the fallacy of four terms by equivocating on "calculates", its middle. I will conclude by remarking why the argument -- at least as I intend it, and on its least tendentious reading -- doesn't equivocate on its major, "thinks", either. Ordinarily "think" is a generic term for any of several different mental activities or states. According to Descartes a thing that thinks is "a thing which doubts, understands, affirms, denies, is willing, is unwilling, and also imagines and has sensory perceptions" (1642, p.19); and similarly, my dictionary (Webster's New Collegiate), under "think", mentions conceive, judge, consider, surmise, expect, determine, resolve, reason, intend, purpose, reflect, infer, opine and decide. In this ordinary generic sense of the term, I take it, it's undeniable that calculating is thinking, and -- if my arguments are sound -- that my pocket calculator calculates and consequently thinks.

Perhaps some special sense of "thinking" can be made out for which calculating is not sufficient -- perhaps some sense in which it's not sufficient to doubt or understand or will, etc., but in which it's necessary to (be able to) doubt and understand and will, etc. (as Descartes surely intended). Perhaps there is some sense in which "thinking" requires such unity, or universality of mental capacity -- or alternatively some other traditional (or perhaps some non-traditional) mark(s) of the mental. At any rate -- whether or not such a sense of "thought" can be made out -- I have only claimed that Cal thinks in the ordinary generic sense of being a subject of at least one kind of contentful or mental state; not that he is a unified, or conscious, or autonomous self or soul or thinker in some special proprietary philosophical sense. I leave it to the opponent of AI to clarify what this sense is and to make out the case, if it can be made, against Cal's thinking in this sense.

References

Descartes, R. 1637. Discourse on Method. Translated in J. Cottingham, R. Stoothoff, and D. Murdoch, The Philosophical Writings of Descartes, Vol. 1. Cambridge University Press: Cambridge (1985). 131-141.

Descartes, R. 1642. Meditations on First Philosophy. Translated in J. Cottingham, R. Stoothoff, and D. Murdoch, The Philosophical Writings of Descartes, Vol. 2. Cambridge University Press: Cambridge (1984). 16-23.

Descartes, R. 1646. "Letter to the Marquess of Newcastle, 23 November 1646". Translated in Anthony Kenny, Descartes: Philosophical Letters. Clarendon Press: Oxford (1970). 205-208.

Dretske, F. 1985. "Machines and the Mental". Proceedings and Addresses of the American Philosophical Association, Vol. 59. 23-33.

Landau, B. and Gleitman, L. 1985. Language and Experience: Evidence from the Blind Child. Harvard University Press: Cambridge, MA.

Nagel, T. 1974. "What Is It Like to Be a Bat?". Philosophical Review 83. 435-450.

Sartre, J. P. 1956. Being and Nothingness. Trans. H. Barnes. Citadel Press: Secaucus, NJ.

Searle, J. R. 1980. "Minds, Brains, and Programs". Behavioral and Brain Sciences 3. 417-424.

Turing, A. M. 1950. "Computing Machinery and Intelligence". Mind LIX. 433-460.

Wittgenstein, L. 1958. Philosophical Investigations. Basil Blackwell Ltd.: Oxford.

