Hauser, L. (May 1993) Reaping the Whirlwind, Minds and Machines, Vol. 3, No. 2, pp. 219-238.


Reaping the Whirlwind:

Reply to Harnad's `Other Bodies, Other Minds'

Larry Hauser

Abstract: Harnad's proposed "robotic upgrade" of Turing's Test (TT), from a test of linguistic capacity alone to a Total Turing Test (TTT) of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought because there is "no evidence" [p.45] of consciousness besides "private experience" [p.52]. Intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from "as if" thought on the basis of (presence or lack of) consciousness (thus rejecting Turing (behavioral) testing as sufficient warrant for mental attribution) has the skeptical consequence Harnad accepts -- "there is in fact no evidence for me that anyone else but me has a mind" [p.45]. I disagree with his acceptance of it! It would be better to give up the neo-Cartesian "faith" [p.52] in private conscious experience underlying Harnad's allegiance to Searle's controversial Chinese Room "Experiment" than give up all claim to know others think. It would be better to allow that (passing) Turing's Test evidences -- even strongly evidences -- thought.

Keywords: Animal intelligence, artificial intelligence, causation, consciousness, Chinese Room Experiment, Descartes, intentionality, other-minds problem, Searle, subjectivity, symbol grounding, Turing, Turing's Test.

Précis of Stevan Harnad's "Other Bodies, Other Minds"

In place of "Turing's original `pen-pal' version of the Turing Test (the TT) [which] only tested for linguistic capacity" [p.42, abstract], Harnad proposes a more stringent Total Turing Test (TTT) of "linguistic and robotic [sensorimotor] capacities" [p.42, abstract]. To pass Harnad's Total Turing Test, "The candidate must be able to do, in the real world of objects and people, everything that real people can do, in a way that is indistinguishable (to a person) from the way real people do it" [p.44]. Harnad offers three arguments in support of this proposal.

(1) Harnad offers intuitive reasons for insisting on the stronger Total Turing Test. Since "our only basis for judging that other people have minds is that they behave indistinguishably from ourselves" [p.46] and "people can do a lot more than just communicating verbally by teletype" [p.44], the TTT, according to Harnad, "just amounts to calling a spade a spade" [p.46]. It "turns out to be no less (nor more) exacting a test of having a mind than the means we already use with one another in our everyday practical solutions to the `other minds' problem" [p.46]. The TTT, Harnad claims, is "just a more rigorous and systematic version of the same test" [p.49] we ordinarily apply in our working attributions of mental properties to others.

(2) Harnad offers scientific reasons for preferring his more stringent test (TTT) to Turing's less restrictive version (TT). Where TTT "constrains mind-modeling to normal scientific degrees of freedom, thereby maximizing the likelihood of converging on the true necessary and sufficient physical conditions of having a mind" [p.44], Harnad contends, the original Turing Test, by settling for something "short of total performance capacity" [p.44], by settling for "just communicating verbally by teletype" [p.44], "would ... increase the degrees of freedom of Turing Testing far beyond those of the normal underdetermination of scientific theories by their data" [p.44]. In this connection, Harnad also conjectures that "the constraints of the TTT itself provide the most likely route to a discovery of any functional modules (if they exist) that may underlie [linguistic] behavioral capacity" [p.45] because "our linguistic capacity must be ... grounded in our robotic capacity" [p.47].

(3) Harnad offers a philosophical reason, deriving from John Searle's Chinese Room Experiment, for preferring TTT to Turing's original "pen pal" test. By "calling for both linguistic and robotic capacity" [p.49], TTT is rendered "immune to Searle's Chinese Room Argument" [p.49], though Searle's argument, Harnad thinks, proves the insufficiency of TT. That the addition of sensorimotor capacities to the system "can foil Searle's argument" [p.49], Harnad concludes, "points out ... that symbol manipulation is not all there is to mental functions and that the linguistic version of the Turing Test just isn't strong enough, because linguistic communication could in principle (though perhaps not in practice) be no more than mindless symbol manipulation" [p.50].

Yet, despite the intuitive, scientific, and philosophical advantages Harnad claims for his robotic upgrade, he is "still not entirely comfortable with equating having a mind with having the capacity to produce Turing-indistinguishable bodily performance" [p.51], however total. Even though "there is no stronger test, short of being the candidate" [p.49], the TTT is "just a more rigorous and systematic version of the same test [i.e., of TT]" [p.49]. TTT, on Harnad's view, is just as incapable (as TT) of distinguishing systems that really have "private experiences such as pain, joy, doubt, determination, and understanding, that we each know exactly what it's like to have" [p.45] from systems that might "just be behaving exactly as if they had a mind -- exactly as I would under the conditions -- but without experiencing anything" [p.45]. Since such private subjective experiences are what mental states and processes really are, on Harnad's view, "only the candidate itself can know (or not know, as the case may be) whether we're really right in thinking that it has a mind" [p.52]. Harnad unblinkingly concludes, "There is in fact no evidence for me that anyone [or anything] else but me has a mind" [p.45]; "there's no way I can know" [p.45].

Summary Response

Surely, skepticism about other minds is a conclusion before which we should blink. If the metaphysical doctrine that mental states and processes are "private, subjective experiences" [p.45], with its epistemological corollary that "I can never check out whether anyone else but me has" [p.45] such states, leads to the conclusion that there is "in fact no evidence for me that anyone else but me has a mind" [p.45], so much the worse for such neo-Cartesian doctrine.{2} So much the better for Turing's Test. Searle's highly touted Chinese Room Experiment hasn't the force (as a counterexample to the sufficiency of Turing's Test) that Harnad thinks. If the persuasiveness of Searle's "experiment" depends (as Harnad sees it does) on the same dubious neo-Cartesian assumptions of ontological subjectivity and epistemic privilege that drive Harnad to skepticism about other minds, then Searle's experiment cannot justify these neo-Cartesian assumptions in the face of the wildly counterintuitive skeptical consequences Harnad thinks (I believe, correctly) these assumptions have.

Section 1, below, contests Harnad's claim that his more stringent Total Turing Test better captures our intuitions and practices concerning attribution of mental properties than Turing's Test does. We are generally guided in our everyday working attributions of mental properties by less exacting considerations than those embodied in Turing's Test. Considerations answering to what can be called "partial Turing Tests" lead us to attribute (or refrain from attributing) specific mental properties to things displaying relevant performance capacities quite independently of their possession of many (much less most, or all) of the capacities normal human adults have.

Section 2 shows that the scientific arguments Harnad takes to favor his more stringent TTT are doubly incoherent. First, if (passing) TTT really provides "no evidence" of thought -- if it doesn't provide either "certainty (as in mathematics) or even high probability on the available evidence, as in science" [p.45] -- it cannot sensibly be maintained also that "TTT constrains mind-modelling to the normal degrees of freedom, thereby maximizing the likelihood of converging on the true necessary and sufficient physical conditions of having a mind" [p.44]. Second, if Harnad's conjecture that "linguistic capacity must be ... grounded in ... robotic capacity" [p.47] is correct, then Harnad's more stringent test of "linguistic and robotic capacities" [p.42] is unnecessary. Turing's original test of linguistic capacity will suffice to test robotic capacities as well (since linguistic capacities presuppose robotic capacities, on Harnad's conjecture). But if Harnad's conjecture is not correct, then his more stringent TTT will not be a valid test of linguistic capacity or understanding. It won't be necessary for a system to be able to do all (or most, or even many) of the "things a real person can do" in order to do any of the mental things people do, as Harnad's TTT supposes.

Section 3 argues that Harnad's Total Turing Test is no more immune to Searle's Chinese Room Experiment than the original Turing Test (TT) -- or rather that TT is no less immune. Harnad shows (1) that "the difference between `real' and `as-if' intentionality" [p.53, note 3] on which Searle's "refutation" of Turing's Test depends is "entirely parasitic" [p.53, note 3] on wholesale neo-Cartesian identification of thought with consciousness or "subjective experience"; and (2) that we have independent reason to reject such identification, i.e., that it leads unavoidably to the unacceptable skeptical conclusion (which Harnad accepts!) that "there's no way I can know" [p.45] "that anyone else but me has a mind" [p.45]. Taken together, (1) and (2) invalidate Searle's experiment as a counterexample to either TT or TTT. Special considerations Harnad adduces on behalf of TTT do not provide it any additional measure of immunity to Searle's putative counterexample.

Section 4 defends Harnad's arguments (and offers further support) for the two key contentions just bruited: (1) that Searle's argument depends on neo-Cartesian assumptions of ontological subjectivity (that private subjective experiences are what mental phenomena essentially are) and privileged access (that my private introspective awareness or sincere avowal of my own mental states, or lack thereof, has the privilege of overriding all public, behavioral evidence to the contrary); and (2) that acceptance of these contentions is inconsistent with our claims to know what (and even whether) anyone or anything else thinks (understands, feels, wants, etc.). Against Harnad, I urge that this skeptical outcome is unacceptable and that neo-Cartesian assumptions of ontological subjectivity and privileged access ought, for this reason, to be rejected.

1. "Turing Tests" and Cartesian Intuitions

1.1. Intuitions and Mental Attributions

Harnad's TTT does not just amount to "calling a spade a spade" [p.46]. It's more like calling a steam shovel a spade! If the issue is how we go about making working attributions of mental properties in everyday life (e.g., "DOS detects keypresses" and "Fido wants to go out"), it seems our bases for making such attributions aren't more stringent than Turing's Test, but less stringent. We make mental predications of things such as infrahuman animals, children, and computers, which can't pass either Turing's Test (TT) or Harnad's more exacting Total Turing Test (TTT). What we actually rely on in practice seems to be partial Turing Tests of limited competencies taken to be associated with specific mental abilities. Consequently, Harnad's "intuition that a convincing computer model has to be able to do many (perhaps most, perhaps all) of the things a real person can do" [p.42] does not "strike a proper chord" [p.42] in me at all! It reminds me of Professor Jefferson (Turing's adversary in a 1951 BBC radio series on AI), who "wanted to say that he would not believe a computing machine could think until he saw it touch the leg of a lady computing machine" [Hodges (1983), p.452].{3} I am more inclined to agree with Dretske: "We don't, after all, deny someone the capacity to love because they can't do differential calculus. Why deny the computer the ability to solve problems or understand stories because it doesn't feel love, experience nausea, or suffer indigestion?" [Dretske (1985), p.23]

By insisting that nothing has any mental capacities unless it has a very considerable proportion of all the capacities a normal human adult has, Harnad's Total Turing Test (TTT) imposes a standard that excludes not only computers but infrahuman animals. TTT guarantees results that accord with Harnad's "intuition" about computers -- "nobody home" [p.52]. However, it also guarantees results that run counter to intuitions about animals of other species Harnad himself seems to share.{4} Indeed, since "the TTT ... among other things, calls for seeing and understanding in the same candidate," it even threatens to exclude many adult human beings -- e.g. the blind, the deaf, and the paralyzed -- from the ranks of thinking things due to their sensory or motor deficits.

The preceding suggests Harnad might attempt to respond to the complaint that his TTT seems to have the absurd Cartesian consequence that no infrahuman animal has any mental properties at all,{5} much as he responds to the critic who would "hasten to remind [him] that blind people have minds too" [p.50]: "It's more advisable methodologically to capture the normal case first, before trying to model its pathologies" [p.50]. Perhaps Harnad would hold that the minds of animals are degenerate cases also -- the normal and paradigm case being a fully competent human adult mind. Given the vast assortment of sensorimotor and linguistic abilities various humans (and infrahuman animals) have, however, it seems the "normal" case is "degeneracy". Compared to many of the blind, most of the sighted are hearing impaired; compared to dogs, humans are smell impaired; compared to bats, we're echolocation impaired; etc. Given the virtually unlimited variety of "things a real person can do," no human individual can do most (much less all) of "the things a real person can do."{6}

Ordinary working attributions of mental properties to computers (and the intuitions that school them) support the contention that computers have such properties (i.e., "think" or "have minds").{7} It's only when our language "goes on holiday" [Wittgenstein (1958), 38, Wittgenstein's italics] and we begin to worry about the metaphysical implications of our working attributions of mental properties to machines that the "intuitions" Harnad commends are apt to obtrude.

When prospecting for intuitions, we should prefer a field which is not too much trodden into bogs by traditional philosophy, for in that case even "ordinary" language will often have become infected with the jargon of extinct theories, and our own prejudices too, as the upholders and imbibers of theoretical views, will be too readily, and often insensibly, engaged. [Austin (1957), p.384]
In the present case, everyday working attributions of mental properties are the "field which is not too much trodden into bogs by traditional philosophy": "intuitions" Harnad discovers by attending to metaphysical speculations occasioned by such attributions are prejudices he and like-minded others [e.g., Searle (1990), Nagel (1974)] partake of as imbibers and upholders of neo-Cartesian metaphysical and epistemological views.

1.2. "Turing" Tests

In his Discourse on Method, Descartes proposes two tests for mind, two "sure and certain means" of determining whether anything -- another person, an infrahuman animal, or even a machine that "had the organs and outward shape" of a human and "imitated our actions as closely as possible for all practical purposes" -- really has the mental properties it acts as if it has. [Descartes (1637), p.140]

The first [test] is that they could never use words, or put together other signs, as we do in order to declare our thoughts to others. [Descartes (1637), p.140]
Secondly, even though such machines [or animals] might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they were not acting through understanding but only from the disposition of their organs. [Descartes (1637), p.140]
Descartes's first (language) test presages Turing's.{8} Descartes's description of what this test requires, "that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do" [Descartes (1637), p.140], is virtually indistinguishable from Harnad's description of the classic Turing Test (TT): "you could communicate ... talking about anything you liked ... and you could never tell which one was human and which one was the machine" [Harnad (1991), p.44]. Turing himself remarks, "The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavor that we wish to include" [Turing (1950), p.435]. Likewise, Harnad's "Total Turing Test", requiring the candidate "to do, in the real world of objects and people, everything that real people can do, in a way that is indistinguishable (to a person) from the way real people do it" [Harnad (1991), p.44], echoes Descartes's characterization of his second, behavior, test as a test of the candidate's ability "to act in all the contingencies of life in the way in which our reason makes us act." [Descartes (1637), p.140]

Not only is the Total Turing Test (TTT) Harnad proposes virtually indistinguishable from Descartes's second (behavior) test, Harnad's motive for proposing it, or the use to which he intends it to be put, seems similar also. Descartes sought to deny subhuman beasts any mental properties. Harnad seems concerned to deny computers any for the foreseeable future. Befitting his exclusionary purpose, Descartes proposes his two tests as necessary conditions for thinking: Failure to pass these tests, Descartes claims, proves infrahuman animals (and would also prove humanoid robots) utterly devoid of mental properties. Similarly, Harnad interprets the original Turing Test (TT), and proposes his Total Turing Test (TTT), as posing necessary conditions for having any mental properties.

Since taking (passing) either TT or TTT as anything like a necessary condition for thought seems to guarantee that only human beings (and no other animals) really have mental properties, Turing's own understanding of his test is more sensible than Harnad's. Turing denies that (passing) his test (TT) should be considered a necessary condition for having mental properties, and styles it instead (something like) a sufficient condition. Passing this test, he proposes, would be very strong evidence that a computer has mental properties, but failing this test is not very considerable evidence that it hasn't. Thus, Turing notes, "The [machine version of the imitation] game [TT] may perhaps be criticized on the ground that the odds are weighted too heavily against the machine" [Turing (1950), p.435], and he adds, "This objection [that the odds are weighted too heavily against the machine] is a very strong one, but at least we can say that if nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection" [Turing (1950), p.435]. Perhaps, pace Harnad's belief that "Turing's test has nothing to do with `proof'" [Harnad (1989), p.17], it would be fair to say that (passing) Turing's Test is not `proof' in the sense of providing logically sufficient or deductive warrant for attributing mental properties; but it certainly seems that passing his test was supposed, by Turing, to provide at least strong inductive support for such attribution. Harnad says, "If a mechanism fails the Turing Test it need not be assigned a mind" but "if it passes, the success is only provisional" [Harnad (1989), p.17]. It would be more in the spirit of Turing (and sounder) to say, "If a mechanism fails the test it needn't be denied a mind" but "if it passes, such success would be quite presumptive."

So much for the advantages of denying that (passing) Turing's test is anything like a necessary condition for having mental properties. Equally important are the advantages of allowing (inductive) sufficiency to Turing's test (TT) (and a fortiori to Harnad's more stringent Total Turing Test (TTT)) as evidence of mind. This allowance of (inductive) sufficiency to Turing's test forestalls the classic other-minds problem Harnad tries to brazen out, for "what more do we ask if ever we are challenged to say why we believe that other people have minds?" [Harnad (1991), p.51]. It is just such conversational evidence (as TT invokes) and behavioral evidence (as TTT invokes in addition) that we rely on in the case of other people also. Harnad is admirably clear about this, and admirably frank in acknowledging the skeptical consequence -- that there's "no way I can know" about others' minds -- if I discount conversation and behavior as "no evidence" [p.45].

In the present section I have argued that Harnad's insistence on taking (passing) Turing's Test as a necessary condition for justified mental attributions is not only unwarranted by intuitions informing our actual practices of attributing (and refraining from attributing) mental properties, but is even contrary to those intuitions. In particular, it is contrary to intuitions about the mental properties of infrahuman species Harnad himself confesses to having. The next section shows how Harnad's rejection of the (evidential) sufficiency of Turing's Test (TT), and his allegiance to the conjecture that linguistic abilities presuppose sensorimotor abilities, undercut his would-be scientific arguments for adopting the TTT in place of TT.

2. Scientific Considerations: Paradox, Conjecture, and Paradox

2.1 Harnad's Paradoxical Appeal to Science

It is difficult to see how Harnad can reconcile claims of scientific advantages to be had from adopting TTT in place of TT with the contention that TTT, like TT, ultimately provides "no evidence" [p.45] of mind -- "neither certainty (as in mathematics) or even high probability on the available evidence (as in science)" [p.45]. How is TTT supposed to be a scientifically better test of mind than TT unless (passing) TTT is supposed to be better evidence of mind than (passing) TT? And how can no evidence be better?{9} Similarly, it seems impossible to reconcile Harnad's skeptical claims concerning TTT's (lack of) evidentiary force with the claim:

The TTT constrains mind-modelling to the normal scientific degrees of freedom, thereby maximizing the likelihood of converging on the true necessary and sufficient physical conditions for having a mind. [p.44]
I submit that any test that converges on the true necessary and sufficient conditions for a phenomenon provides evidence (conferring high probability, as in science) for the presence of that phenomenon. Conversely, no putative test which fails to provide evidence of a phenomenon (i.e., which fails to confer probability on the claim that the phenomenon is present) has any "likelihood of converging on the true necessary and sufficient ... conditions for" the presence of that phenomenon. Harnad's refusal to allow the (evidential) sufficiency of any form of Turing (or behavioral) testing -- or, equivalently, his acceptance of Searle's distinction between real and mere as-if mental phenomena -- is inconsistent with his attempt to provide scientific reasons for upgrading the TT to the TTT.
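
The point can be put in explicitly probabilistic terms. On any standard (e.g., Bayesian) construal of evidence, a test is evidence for the presence of mind just in case passing it raises the probability that a mind is present; a test with no evidential force leaves that probability exactly where it started, no matter how often it is passed, and so cannot "converge" on anything. The following minimal sketch (my illustration, not Harnad's; the prior and likelihood ratios are arbitrary) makes the arithmetic vivid:

    # Bayes' rule with likelihood ratio k = P(pass | mind) / P(pass | no mind).
    # A test that is "no evidence" has k = 1; repeated passes then leave the
    # posterior stuck at the prior -- no convergence on the truth of the matter.
    def posterior(prior: float, k: float) -> float:
        """P(mind | pass), given prior P(mind) and likelihood ratio k."""
        return (k * prior) / (k * prior + (1.0 - prior))

    p = 0.5
    for _ in range(5):
        p = posterior(p, 1.0)  # five passes of a "no evidence" test
    print(p)                   # -> 0.5: the prior, unmoved

    p = 0.5
    for _ in range(5):
        p = posterior(p, 3.0)  # five passes of a genuinely evidential test
    print(round(p, 4))         # -> 0.9959: converging toward certainty

If (passing) TTT confers no probability on the presence of mind, it occupies the k = 1 position, and the advertised "likelihood of converging on the true necessary and sufficient physical conditions of having a mind" evaporates.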

As for Harnad's thought that despite being "not really ... `evidence' at all, at least no scientific evidence" [p.46], Turing Tests (e.g., TT and TTT) have a kind of value as "everyday, practical solutions to the `other minds' problem" which "most of us accept and use ... all the time, without even realizing it" [p.49], I make three observations. First (as argued in the preceding section), the tests we actually do use in our "everyday, practical solutions to the `other minds' problem" are less rigorous than TT, not more so. Second, the distinction between scientific tests which give "high probability on the available evidence" and practical tests which don't is not sharp. At best, we might distinguish tests yielding high probability as more scientific than those yielding less probability. But any putative test, I submit, has to yield some (increased) probability (when met) that what it tests for obtains in order to be of any use, scientific or practical. Denial of all evidential force to Turing (behavioral) testing (or, equivalently, acceptance of the doctrine of as-if intentionality) is as incompatible with claims of practical utility on behalf of such tests as with claims of scientific validity on their behalf. Last, as it is characteristic of scientific tests to yield, not deductive certainty, but only "high probability on the available evidence" (and of "merely practical" tests, perhaps, to yield only some probability), it will be no objection to the Turing Test as a practical or even scientific test just to show -- as Harnad credits Searle's Chinese Room Experiment with showing -- that "linguistic communication could in principle (though perhaps not in practice) be no more than mindless symbol manipulation" [p.50, my emphasis]. To invalidate Turing's Test (TT), the experiment would need to show that linguistic communication could be no more than mindless symbol manipulation, not just in principle, but in fact.

2.2 Harnad's Paradoxical Conjecture

Harnad [p.47] suggests that "our linguistic capacity must be ... grounded in our robotic capacity." As Harnad's suggestion seems to be that robotic capacity is causally prerequisite, I take the following to be Harnad's Conjecture:

(HC) The sensorimotor capacities TTT tests for are causally necessary conditions for the linguistic capacities TT tests for.
This conjecture entangles his scientific argument for upgrading TT to TTT in a second paradox. If the sensorimotor capacities TTT tests for are causally necessary for having the linguistic capacities TT tests for, then having the linguistic capacities TT tests for will be sufficient evidence of the sensorimotor capacities which (on HC) linguistic capacity presupposes. The truth of HC, rather than mandating the robotic upgrade of TT to TTT, would make it unnecessary: TT would suffice, by itself, to test for robotic capacities as well as linguistic capacities.{10} On the other hand, if HC is false, then (passing) TTT will not be a valid test of the linguistic capacities TT tests for. If robotic capacities are not causally necessary for linguistic capacities, then lack of robotic capacities won't suffice to evidence lack of linguistic capacities. If HC is true, then TTT is unnecessary (TT will suffice to test for robotic capacities also). If HC is false, then testing for robotic capacities is not a dependable way of testing for linguistic capacities: as a test of linguistic capacities TTT will be invalid.
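
The dilemma can be displayed schematically. Writing L for the linguistic capacities TT tests for and R for the robotic capacities TTT adds (the notation is mine, not Harnad's):

    \[ \text{(HC)} \quad L \Rightarrow R \qquad \text{($R$ is causally necessary for $L$)} \]
    \[ \text{If HC holds: since } L \Rightarrow R,\ \Pr(R \mid \text{pass TT}) \ge \Pr(L \mid \text{pass TT})\text{, so TT already tests for } R \text{ and TTT is redundant.} \]
    \[ \text{If HC fails: } \neg R \not\Rightarrow \neg L\text{, so failing TTT's robotic component does not evidence } \neg L\text{, and TTT is invalid as a test of } L. \]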

What could Harnad be thinking here? I suspect he is thinking something like the following:

If HC were false, if it were possible to confer behavioral (as-if) linguistic capacities (TT passing capacity) on systems independently of (TTT passing) sensorimotor capacities, such systems would still have to be judged (on the basis of Searle's Chinese Room Experiment) to lack truly mental (consciously intentional) linguistic capacities (e.g., understanding). So, TTT might still be a salient test for these truly mental linguistic capacities.
HC, even if it were false as a conjecture about the dependence of behavioral linguistic capacities on sensorimotor capacities, might, on this showing, nonetheless be a true conjecture about the relation of mental linguistic capacities to sensorimotor capacities. As a conjecture about mental linguistic capacities, HC would not entail the uselessness or redundancy of TTT, because TT (by the Chinese Room Experiment) does not suffice to test for mental linguistic capacities at all (just behavioral `as-if' capacities). Harnad's case seems to hang entirely, then, on accepting Searle's notorious "experiment."

3. Searle's Chinese Box

3.1 The Chinese Room Revisited

John Searle proposes this widely discussed thought experiment as a counterexample to Turing's Test: Searle invites us to imagine we are locked in a room, hand-tracing a Turing-Test-passing program for Chinese by following a set of written English instructions of the general sort, "If `squiggle-squiggle' comes in, send `squoggle-squoggle' out" [p.47]. On Harnad's telling,

I could do that without ever understanding what `squiggle-squiggle' or `squoggle-squoggle' meant. In particular, if the symbols were Chinese, I would no more understand Chinese if I were doing the symbol-manipulation than I do now! [Searle understands no Chinese.] So if I would not be doing any understanding under these conditions, neither would the computer whose functions I was duplicating. So much for the Turing Test and the minds of machines. [p.47]
The thought here is that Searle in the room, hand-tracing an imagined Chinese text-processing program capable of producing output (to Chinese input) indistinguishable (to observers outside the room) from a native Chinese speaker's, would pass Turing's Test and not understand Chinese. Thus, it is claimed, Searle in the room is a counterexample to the proposition that if anything passes Turing's Test it understands. Thus, it's claimed, Searle's experiment proves that TT is not a sufficient test of understanding or (more generally) of mind and that we need to distinguish genuine intentionality (really having intentional mental states, as we do) from mere as-if intentionality (merely acting as if possessed of intentional mental states, without really having them -- as with computers and Searle-in-the-room). Turing's Test, it's maintained, only suffices to evidence as-if thinking.
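
To make vivid what "mindless symbol manipulation" amounts to, consider a toy sketch (mine, not Searle's or Harnad's; the rule table and fallback are invented for illustration) of the sort of rule-following the room's occupant performs. Every reply is produced by matching uninterpreted tokens against rules; nothing anywhere encodes what any token means:

    # A toy "Chinese Room": replies are generated purely by rules of the form
    # "if squiggle-squiggle comes in, send squoggle-squoggle out". The shapes
    # of the symbols do all the work; their meanings do none.
    RULES = {
        "squiggle-squiggle": "squoggle-squoggle",
        "squiggle": "squoggle",
    }

    def room_reply(input_tokens: str) -> str:
        """Return whatever output the rulebook dictates for the given input."""
        return RULES.get(input_tokens, "squoggle-squoggle")  # fallback rule

    print(room_reply("squiggle-squiggle"))  # -> squoggle-squoggle

Whether any program of this general character, however vast its rulebook, could pass TT in real time is a further, empirical question (taken up in Section 3.3, below); the philosophical question Searle presses is whether anything of this character could understand.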

If Searle is right, as Harnad thinks he is, about the vulnerability of Turing's original (teletype) test to this Chinese Room counterexample, and if, as Harnad tries to show, "it turns out that only the teletype version, the TT, is vulnerable to it" [p.47, my emphasis]; if the TTT by "calling for both linguistic and robotic capacity ... is ... immune to Searle's Chinese Room Argument" [p.49]; then this, I suppose, would argue strongly in favor of "turning to the stronger version of the TT that is being proposed [by Harnad], the TTT" [p.49]. But Harnad's TTT enjoys no such special immunity to Searle's "experiment" as Harnad claims -- and none is needed. Neither Turing's Test (TT) nor Harnad's robotic upgrade of it (TTT) is vulnerable to Searle's argument.

3.2 Harnad's "Robot Reply"

According to Harnad, "Searle's argument worked because Searle himself could do everything the machine did -- he could be the whole system -- and yet still be obviously failing to understand" [p.50]. But now, Harnad asks us to consider

the TTT case of seeing [where] the two possibilities would again be whether the machine really saw objects or simply acted exactly as if it did. But now try to run Searle's argument through. Searle's burden is that he must perform all the internal activities of the machine -- he must be the system -- but without displaying the critical mental function in question (here, seeing; in the old test, understanding). Now machines that behave as if they see must have sensors -- devices that transduce patterns of light on their surfaces and turn that energy into some other form (perhaps other forms of energy, perhaps symbols). So Searle seems to have two choices. Either he only gets the output of those sensors (say, symbols), in which case he is not doing everything that the candidate device is doing internally (and so no wonder he is not seeing -- here the "Systems Reply" would be perfectly correct); or he looks directly at the objects that project onto the device's sensors (i.e., he is being the device's sensors) -- but then he would in fact be seeing! [p.50].
Harnad's argument is ingenious, but neglects a crucial third alternative. When "he looks directly at the objects that project onto the device's sensors (i.e., he is being the device's sensors)," Searle in the visual room might be, not (consciously) seeing, but blind seeing.{11} Furthermore, blindsight is precisely the relevant alternative! To function as the machine's visual sensor Searle in the room is only required to "transduce patterns of light on [his retina's] surfaces and turn that energy into some other form (perhaps other forms of energy, perhaps symbols)" [my emphasis]. He is not required to consciously transduce them. The supposition of consciousness here -- that "he would in fact be [consciously] seeing" -- is gratuitous. Blindsight (unconscious visual transduction) is all that's required for Searle in the visual room to "perform all the internal activities of the machine." Harnad seems to have two choices. Either he allows that blindsight really is an unconscious kind of seeing (not just as-if seeing), in which case we might also allow that Searle-in-the-Chinese-Room exhibits "blind understanding"; or he denies that blindsight is real seeing (requiring that "the critical mental function in question" be conscious to be genuinely mental), in which case blindsighted Searle in the visual room, no less than Searle-in-the-Chinese-Room, will "perform all the internal activities of the machine ... without displaying the critical mental function." [p.50] Either way, there is parity between the two cases, and the TTT enjoys no special immunity to Searle's argument. If the critical mental function in question is not required to be conscious (as I advocate), then TT and TTT are both immune to Searle's example. If the critical mental function in question is required to be conscious (as Harnad advocates), then both TT and TTT are vulnerable to Searle's example.

3.3 Stuck Inside the Chinese Room with Descartes's Blues Again

Harnad sees that the Chinese Room Experiment is not essentially about intentionality (as it seems), but about consciousness of intentionality:

if there weren't something it was like [i.e., consciousness, or subjective experience] to be in a state that is about something [i.e., an intentional state] ... then the difference between "real" and "as-if" intentionality would vanish completely. So [the problem of "intentionality," which is whatever it is that makes mental states be "about" things in the world] seems to be completely parasitic on [the problem of consciousness (or what it's like to be in a mental state)]. [p.53, note 3]
The crucial Cartesian sleight of hand, as always, occurs when Searle invites us to adopt "the first-person point of view" [Searle (1980b), p.451]. It is only from the point of view of the agent, from the imagined point of view of Searle in the room, that Searle-in-the-room seems to be "obviously failing to understand" [p.50]. From the point of view of an external observer, outside the room, of course, it seems perfectly obvious that someone in the room does understand. It is not just that we "have an alternative source of direct information: our own private experience" [p.52, my italics]. What needs to be accepted for the Chinese Room Experiment to provide a compelling counterexample to Turing is that private experience is the only direct source of information we have about our mental properties and is privileged as such to override all indirect (i.e., all public and behavioral) evidence to the contrary. Without the neo-Cartesian assumption that each person has privileged introspective access to their own mental states and processes -- to their (lack of) understanding, in particular -- it will be reasonable to describe Searle-in-the-room as unconsciously understanding Chinese and accept unconscious understanding as genuinely mental (not just as-if) understanding rather than accept the consequence Harnad admits, "that there's no way I can know" whether anyone besides myself really understands (not just as-if understands) Chinese, or English. It will be reasonable to describe Searle-in-the-room as unconsciously understanding Chinese rather than accept the consequence (more generally) that "there's no way I can know" whether anyone else really has (not just as-if has) whatever mental properties their behavior seems to bespeak.

What makes us think Searle in the room does not understand Chinese, besides his (imagined) disavowal of understanding? Well, the situation Searle describes is not unlike that of a native English-speaking tourist relying on a Chinese-English phrase book, whom we should characteristically not credit with understanding Chinese. Thus, a consideration motivating the systems reply -- that it is not Searle who understands Chinese but the system of Searle plus the written instructions (Searle-in-the-room) -- is that if we take away the written instructions, Searle will no longer be able to process the Chinese input. Similarly, we should be hesitant to attribute any understanding of Chinese to our English-speaking tourist so long as she is completely dependent on the phrase book. But if she's memorized the phrase book, it seems we would credit her with some understanding of Chinese. It seems the crucial case with regard to evaluating Searle's would-be counterexample to Turing is the case Searle offers as refutation of the systems reply, where Searle is supposed to "memorize all the symbols and symbol manipulation rules" [p.48], yet still he "understands nothing of Chinese" [Searle (1980a), p.419]. In this scenario, it seems our only basis for denying that Searle understands is that (we are supposed to imagine) he would "tell us that he is understanding no Chinese" [p.48]. Here, I maintain, not only needn't we credit his (imagined) disavowal as proof positive that he doesn't understand (given the general unreliability of introspection and the possibility of "blind understanding" already mentioned); we needn't even credit the supposition that this is what Searle in the room would report if he were somehow able to memorize all the symbols and manipulation rules.{12}

Suppose we take seriously the idea that Searle-in-the-room is passing Turing's Test in Chinese -- conversing (or corresponding) with native Chinese speakers, in real time, with such fluency that these other Chinese speakers conclude he understands. This would be a remarkable feat, even supposing the real-time constraints were not the more stringent conversational ones but the more lenient ones constraining correspondence. Now we are asked to imagine that Searle (even more remarkably) memorizes the rules and symbols and fluently converses (or corresponds) with his Chinese interlocutors by consciously applying the memorized rules. I, for one, am by no means confident that I can say what private experiences or subjective impressions of understanding would or wouldn't accompany such a remarkable performance.{13} I, for one, am perfectly able (even inclined) to imagine that someone capable of conversing by this procedure would know they were conversing in a language, feel they understood, and tell us they understood this language.{14} The Chinese Room Argument not only requires us to grant overriding privileges to actual introspective judgements of (lack of) understanding or disavowals of understanding (which is questionable enough); Searle's "experiment" requires us to grant overriding privileges to introspective judgements and disavowals we imagine we would make in circumstances we can hardly imagine.

Harnad rightly observes that Searle's Chinese Room Experiment proves (at most, given the considerations above) that "linguistic communication could in principle (though perhaps not in practice) be no more than mindless symbol manipulation" [p.50, my italics] -- it shows, at most, that it's logically possible to pass Turing's Test for understanding Chinese (e.g.) without really understanding. There is no practical (nor even, I think, nomological) possibility of a person passing Turing's Test for understanding Chinese by hand-tracing a Chinese natural-language-understanding program in real conversational (or even epistolary) time. Of course the mere logical possibility of a counterexample goes to refute (passing) Turing's Test as a definition, i.e., as providing logically sufficient grounds for attributing thought. But, of course, the mere logical possibility of a counterexample has absolutely no tendency to invalidate a proposed empirical test, such as Turing's. So much for Searle's `experiment' and the alleged as-if-ness of the apparent mental properties of machines.

4. The Other-Minds Problem

By insisting on distinguishing real from "as if" intentionality on the basis of "subjective experience" [p.52] that "I can never check whether anyone else has" [p.45], Harnad accepts Searle's invitation to "regress to the Cartesian vantage point" [Dennett (1987), p.336]. Not surprisingly, Harnad inherits the anomalies of that vantage point -- most notably, he gets driven to skepticism about our knowledge of other minds. He says,

I either accept the dictates of my intuition (which is the equivalent of "If it looks like a duck, walks like a duck, quacks like a duck ... it's a duck") or I admit that there's no way I can know. [Harnad (1991), p.45]
Since Harnad, like Searle (and like Descartes), supposes "the fact of subjectivity itself" [p.52] is a further fact of the matter about mind beyond "Turing indistinguishable bodily performance" [p.51], a fact of the matter that "only the candidate itself can know" [p.52], Harnad cannot accept the duck principle. He has to admit "there's no way I can know"!

Searle, like Harnad, rejects the "duck" principle and insists something could behave exactly as if it had some mental property (understood Chinese, say) and not have it. It is essential to his Chinese Room Experiment that something (e.g., a computer, or Searle-in-the-Chinese-Room) can behave in a way that "from the external point of view" is absolutely "indistinguishable from ... native Chinese speakers," yet "not understand a word of ... Chinese" [Searle (1980a), p.418]. What warrants this conclusion in the face of overwhelming behavioral evidence, overwhelming appearances "from the external point of view" to the contrary, Searle maintains, is that "from the point of view of the agent, from my point of view" [Searle (1980a), p.420] "it seems quite obvious to me ... that I do not understand" [Searle (1980a), p.418]. Like Harnad, Searle grants overriding epistemic privilege -- the privilege of overriding all public behavioral evidence to the contrary -- to how it seems to me from my point of view, or what mental properties I introspectively judge (or sincerely avow) myself to have. Unlike Harnad, however, Searle [Searle (1980a), pp.421-422] refuses to acknowledge that any skeptical difficulties about other minds arise from such a grant of overriding epistemic privilege to how it seems to me from the "first-person point of view". What, if anything, warrants Searle's cavalier dismissal of the "Other Minds Reply"?

Absolutely nothing: Searle's dismissal of the Other Minds Reply is unwarranted. Where Harnad bites the bullet (accepts epistemological solipsism), Searle merely ducks the issue. It is certainly not enough, for instance, to say, as Searle does in this connection, "The epistemic, methodological questions are relatively uninteresting because they always have the same answer: Use your ingenuity. Use any weapon at hand, and stick with any weapon that works" [Searle (1990b), p.640]. It isn't enough for the reason Harnad recognizes: to reject the "duck principle", to deny supposedly inner states of mind any reliable outward symptoms or criteria, is precisely to reject the only "weapon at hand" for warranting judgments concerning the mental properties of others.

It has been urged in Searle's defense{15} that, "Searle seems to be untroubled because he seems to reason that other humans have brains, which we know support mental states (as in my own case)." The idea is that since I know that I have mental states (from my own experience of them), and that other human beings are like me in relevant respects (i.e., they have brains), I have inductive warrant (by analogy with my own case) for believing other humans have mental states too. This Argument from Analogy, familiar to philosophers, is a standard Cartesian reply to the Other Minds problem, subject to familiar objections.{16} For starters, it involves inductive extrapolation from a single case (my own). If such an argument provides any inductive warrant for my belief that other people really have the mental properties their behavior seems to bespeak, it must be an exceedingly weak one: too weak a warrant, it seems, to justify the robust confidence (and justification) we have in attributing minds to others. Then there's the difficulty about how the causally relevant feature (just having a brain?) is supposed to be established by my own case. Even supposing I know having a (human) brain is necessary for mental states in my own case, how do I know it's sufficient? (Note that Searle's Chinese Room Experiment itself presumes that having a human brain is no bar to someone -- e.g., Searle in the room -- behaving exactly as-if possessed of mental properties (e.g., understanding) they don't actually possess!) The basic point is that the solipsistic predicament pertains to individuals, not species [Leiber (1985)].{17} If Descartes is universally acknowledged to have other-minds troubles despite his allowance of the causal necessity of mental states for linguistic performance (hence the inductive sufficiency of linguistic performance to evidence these mental states), Searle has other-minds problems in spades. To acknowledge, with Searle, the possibility of "as-if intentionality" -- to deny the causal necessity of mind or understanding for Turing-Test-passing behavior (even Total-Turing-Test-passing behavior) -- entails the insufficiency of TT (or any behavioral test) as evidence of understanding or mind. But, since such behavioral evidence as TT and TTT invoke provides "our only basis for judging other people have minds" [p.46], this verdict of insufficiency does seem to have the further consequence Harnad sees. "There is, in fact, no evidence for me [sufficient to warrant my belief] that anyone else but me has a mind," "there's no way I can know" [p.45]. It is not a solution to this problem merely to feign amnesia [Searle (1980a), p.422].

Whirlwind Conclusion

Whether or not Harnad and Searle are dualists of some sort, their insistence on "consciousness (or what it's like to be in a mental state ...)" as "the difference between `real' and `as-if' intentionality" [Harnad (1991), n.3] is clearly Cartesian. So long as they insist on this, whether they call themselves dualists or admit to being Cartesians or not,{18} they inherit other-minds problems and get driven toward epistemological solipsism. Regression to the Cartesian vantage point resurrects its anomalies: in particular, other-minds problems concerning both other human minds and the minds of infrahuman animals. Nothing Harnad says, or Searle says, indicates that these problems are any more tractable for them than for any of their (very long line of) Cartesian predecessors. Indeed, Harnad's and Searle's nonresponsive "replies" to Cartesian other-minds troubles -- Harnad's willingness to "bite the bullet" and Searle's insistence on stonewalling -- strongly indicate that the Cartesian anomalies they inherit remain, on such neo-Cartesian views as Harnad's and Searle's, just as anomalous as ever.

As for the "symbol grounding problem" that Searle's Chinese Room Experiment is supposed to pose for behaviorist or functionalist (e.g., computational) accounts of the mental -- the problem that "syntax alone is not sufficient for semantics" [Searle (1984a), p.34] -- this seems a problem for any account of the mental, not just the behaviorist, functionalist, and computational accounts Searle's argument targets. It is not clear, for instance, how, for all it costs, invocation of "consciousness," "the first-person viewpoint," "subjective experience," and like Cartesian paraphernalia helps with the symbol grounding problem at all. It even seems clear that it doesn't: private conscious experience (plus syntax) is not sufficient for semantics either!{19}

Given the severity of its side effect -- "there's no way I can know" "that anyone else but me really has a mind" [p.45]! -- and its lack of any symbol-grounding benefits, Searle's regression to the Cartesian vantage point is strongly contraindicated. Insofar as Harnad's case against TT depends on acceptance of Searle's would-be counterexample, and the force of Searle's example depends on a neo-Cartesian grant of epistemic privilege to how it seems to me "from the point of view of the agent, from my [first-person] point of view" [Searle (1980a), p.420], Harnad's proposed "robotic upgrade of the TT to the TTT" [p.50] is unwarranted.


Notes

1. Harnad, Stevan (1991). `Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem.' Minds and Machines 1: 43-54. Unless otherwise indicated, page references are to this work. The present version of this paper owes much to the advice and criticism of two anonymous reviewers for Minds and Machines.

2. By "neo-Cartesian" I refer to the views of such contemporary authors as Nagel [1974, 1986] and Searle [see especially, Searle 1984a]: the class of views Paul Churchland calls "property dualism". While not explicitly positing immaterial mental substances as the bearers of mental properties, as classic Cartesian dualism (which Churchland calls "substance dualism") does, neo-Cartesians continue to regard mental properties (e.g., mental states and processes) as being, essentially, private subjective states of conscious awareness.^

3. "...but they cut this out of the broadcast because (as Braithwaite said) one could hardly call that thinking." [Hodges (1983), p.452]^

4. "Some of us (not this writer) are skeptical about animals' minds" [Harnad (1989), p.17].^

5. So, dogs don't really smell rabbits -- it's just "as-if" smelling. Eagles don't really see rabbits -- it's just "as-if" seeing. Searle's distinction between real conscious thinking (e.g., ours) and mere "as-if" thinking (e.g., computers') echoes Descartes's distinction between real human and "as-if" infrahuman thinking.

6. As for the suggestion (of an anonymous reviewer) that my objection to Harnad's test (TTT) (and his interpretation of Turing's) as a necessary condition of thought "might be met by considering the tests as having only to do with the possession of human-type mentality," I submit that the issue here is whether computers have any sort of mentality -- which Searle denies. I take it Harnad would deny this also. Harnad, I take it, would not be content to conclude that just as cats have real mentality (cat-type mentality) despite lacking "human-type mentality", so do computers have real mentality (computer-type mentality) despite lacking "human-type mentality". He wants to say, I take it, that cat-type mentality, like human-type mentality, is real mentality; and he wants to deny that computer-type mentality is real (it's just "as-if" mentality). My point is that if you take the inability of computers to pass Turing's Test or Harnad's Total Turing Test as showing that their apparent mental capacities aren't real, then the same conclusion must be drawn about the apparent mental capacities of cats. Conversely, if the inability of cats to pass Turing's Test or Harnad's Total Turing Test is not to be taken to show that the apparent mental capacities of cats aren't real (just nonhuman), neither should the inability of computers to pass these tests be taken to show that the apparent mental abilities of computers aren't real (albeit nonhuman).

7. Limitations of space preclude a detailed reply [Hauser (1991a), (1991b) provide more extended consideration] to those, such as the anonymous reviewer, who would insist -- too easily, I think -- that "such ordinary attributions [are] mere metaphor or manner of speech" and thus fail to evidence the real presence of mental properties in the machines to which we make them. The short reply to this is, since "an ordinary application of Occam's Razor places the onus of proof on those who wish to claim that these sentences are ambiguous" [Searle (1975), p.40], such attributions provide at least prima facie warrant for the contention that machines to which we attribute such properties really have them. Moreover, we intuitively sense that we speak figuratively when we predicate mental terms of computers only in some (but by no means in all, or even most) instances. If I say "DOS erased my files because it hates me" that's figurative. When I say, "DOS recognizes the dir command," that's different!

8. This has also been noted by Justin Leiber [(1991)], as was brought to my attention by an anonymous reviewer. See also, Gunderson [(1985)].

9. Unless passing TT is taken to be infirming evidence of mental capacity! Surely, neither Harnad nor anyone else holds that.

10. Descartes shares Harnad's convictions about the inseparability of linguistic and total behavioral capacities (both require and hence evidence, for Descartes, the same universal instrumentality of reason). Consequently, though Descartes "distinguishes two tests at the outset, he actually relies more on one test -- the language test -- which, if not passed by a certain subject S ... would entail S's failure to pass the action test." [Gunderson (1985), p.10]

11. Blindsight is a phenomenon associated with certain sorts of brain lesions, which has been studied and discussed by Weiskrantz [(1986)] and others. Blindsight is marked by "a striking dissociation between discriminative capacity and acknowledged awareness" [Weiskrantz (1986), p.173]. As Searle describes the phenomenon, a blindsighted individual "can give correct answers to questions about visual events and objects that he is presented with, but he claims to have no visual awareness of these objects and events." [Searle (1983), p.47]

12. David Cole [(1984)] similarly urges that scant weight should be accorded Searle-in-the-room's denials of understanding. (This was called to my attention by an anonymous reviewer.)

13. I think I am probably less able to imagine what it would be like to be such a remarkable superhuman creature as this than I am able to imagine what it's like to be a bat [Nagel (1974)]. The mental endowments of such a superhuman creature would seem at least as unlike mine as a bat's are.

14. I needn't know it's Chinese I'm conversing in and understanding -- just that I understand this language (whatever it is) that I'm conversing in.

15. By an anonymous reviewer of an earlier version of this paper.

16. See, for instance, Paul Churchland [(1988), pp.68-70] and Jerry Fodor [(1968), p.131].

17. "We can't write solipsism species-wide. We can't really argue that we humans have a peculiarly intimate way of knowing that all of us think and feel, while requiring with respect to ... possible extraterrestrial species, or [computers], or chimpanzees, et al., some additional and different (and maybe conveniently impossible) form of demonstration that they think and feel." [Leiber (1985), pp.60-61]^

18. Searle explicitly disavows dualism [(1980b), p.454; (1982), p.57], yet speaks of "ontological subjectivity" [(1989), p.194]; he continues to "always insist on the first-person point of view" [(1980b), p.451], yet explicitly disavows "Cartesian paraphernalia" [Searle (1987), p.146]; and he accords lack of introspective awareness of understanding, or first-person disavowals of understanding, the privilege of overriding all behavioral evidence in his Chinese-Room thought experiment, yet says, "I assign no epistemic privilege to our knowledge of our own conscious states" [(1990b), p.635]. I call it the Richard Nixon reply: "I am not a dualist." (Just an as-if dualist.)

19. Wittgenstein's discussion of rule-following and "private language" [(1958), Sects. 131-299] amply demonstrates the futility of trying to ground public meanings in private experiences. This is well brought out in Kripke's [(1982)] discussion of Wittgenstein. Boghossian [(1989)] provides a lucid overview that shows the force of the considerations Wittgenstein and Kripke advance against the possibility of grounding meaning in subjective phenomenological states. Ruth Millikan [(1984), pp.89f] makes a similar point (this was called to my attention by an anonymous reviewer) on the basis of an extended argument that is independent of Wittgenstein's.

References

Alcock, J.E. (1987), `Parapsychology: Science of the Anomalous or Search for the Soul?', Behavioral and Brain Sciences 10, pp.553-643.

Austin, J. L. (1957), `A Plea for Excuses', in R. Ammerman, ed., Classics of Analytic Philosophy, Indianapolis: Hackett Publishing Co., pp.379-398.

Boghossian, P. A. (1989), `Rule Following Considerations', Mind 98, pp.507-549.

Churchland, Paul (1988), Matter and Consciousness, Cambridge, MA: MIT/Bradford Books.

Cole, David (1984), `Thought and Thought Experiments', Philosophical Studies 45, pp.431-444.

Dennett, D. C. (1982), `The Myth of the Computer: An Exchange', New York Review of Books XXIX(11), p.56.

Dennett, D. C. (1987), `Fast Thinking', The Intentional Stance, Cambridge, MA: MIT/Bradford Books, pp.323-337.

Descartes, R. (1637), Discourse on Method, translated in J. Cottingham, R. Stoothoff, and D. Murdoch eds., Philosophical Writings of Descartes Vol.I, New York: Cambridge University Press (1985), pp.7-78.

Descartes, R. (1642), Objections and Replies, translated in J. Cottingham, R. Stoothoff, and D. Murdoch eds., Philosophical Writings of Descartes Vol.II, New York: Cambridge University Press (1984), pp.63-397.

Dretske, F. (1985), `Machines and the Mental', Proceedings and Addresses of the American Philosophical Association 59, pp.23-33.

Fodor, Jerry A. (1968), `Materialism', in David M. Rosenthal, ed., Materialism and the Mind-Body Problem, Indianapolis: Hackett Publishing Company, pp.128-149.

Gunderson, K. (1985), Mentality and Machines, Minneapolis: University of Minnesota Press.

Harnad, S. E. (1989), `Minds, Machines and Searle', Journal of Experimental and Theoretical Artificial Intelligence 1, pp.5-25.

Harnad, S. E. (1991), `Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem', Minds and Machines 1(1), pp.43-54.

Hauser, L. S. (1991a), `Why Isn't my Pocket Calculator a Thinking Thing?', in L. Hauser and W. J. Rapaport (1991), `Why Isn't my Pocket Calculator a Thinking Thing? Essay, Comments, and Reply', Technical Report 91-20, Buffalo: SUNY Buffalo Department of Computer Science; forthcoming in Minds and Machines.

Hauser, L. S. (1991b), `The Sense of "Thinking": Reply to Rapaport', in L. Hauser and W. J. Rapaport (1991), `Why Isn't my Pocket Calculator a Thinking Thing? Essay, Comments, and Reply', Technical Report 91-20, Buffalo: SUNY Buffalo Department of Computer Science; forthcoming in Minds and Machines.

Hodges, A. (1983), Alan Turing: The Enigma of Intelligence, New York: Touchstone/Simon & Schuster.

Kripke, S. (1982), Wittgenstein on Rules and Private Language, Cambridge, MA: Harvard University Press.

Leiber, Justin (1991), Invitation to Cognitive Science, Cambridge, MA: Basil Blackwell.

Leiber, Justin (1985), Can Animals and Machines be Persons?: A Dialogue, Indianapolis, IN: Hackett Publishing Company.

Millikan, Ruth (1984), Language, Thought and Other Biological Categories, Cambridge MA: MIT Press.

Nagel, T. (1974), `What Is It Like to Be a Bat?', Philosophical Review 83, pp.435-451.

Nagel, T. (1986), The View from Nowhere, New York: Oxford University Press.

Searle, J. R. (1975), `Indirect Speech Acts', Expression and Meaning, Cambridge, Eng.: Cambridge University Press, pp.30-57.

Searle, J. R. (1980a), `Minds, Brains and Programs', Behavioral and Brain Sciences 3, pp.417-424.

Searle, J. R. (1980b), `Intrinsic Intentionality', Behavioral and Brain Sciences 3, pp.450-456.

Searle, J. R. (1982), `The Myth of the Computer: An Exchange', New York Review of Books XXIX(11), pp.56-57.

Searle, J. R. (1983), Intentionality: An Essay in the Philosophy of Mind, New York: Cambridge University Press.

Searle, J. R. (1984a), Minds, Brains and Science, Cambridge, MA: Harvard University Press.

Searle, J. R. (1984b), `Intentionality and its Place in Nature', Synthese 61, pp.3-16.

Searle, J. R. (1987), `Indeterminacy, Empiricism and the First Person', The Journal of Philosophy LXXXIV(3), pp.123-146.

Searle J. R. (1989), `Consciousness, Unconsciousness and Intentionality', Philosophical Topics XVII(1), pp.193-209.

Searle, J. R. (1990a), `Consciousness, Explanatory Inversion and Cognitive Science', Behavioral and Brain Sciences 13, pp.585-596.

Searle, J. R. (1990b), `Who is Computing with the Brain', Behavioral and Brain Sciences 13, pp.632-640.

Turing, A. M. (1950), `Computing Machinery and Intelligence', Mind LIX, pp.433-460.

Weiskrantz, L. (1986), Blindsight: A Case Study and Implications, Oxford: Oxford University Press.

Wittgenstein, L. (1958), Philosophical Investigations, translated by G. E. M. Anscombe, Oxford: Basil Blackwell Ltd.