FODOR, FUNCTIONS, PHYSICS, AND FANTASYLAND:
IS AI A MICKEY MOUSE DISCIPLINE?

Christopher D. Green
Department of Psychology
York University
North York, Ontario M3J 1P3
CANADA

e-mail: christo@yorku.ca

(1996) Journal of Experimental and Theoretical Artificial Intelligence, 8, 95-106.


Abstract

It is widely held that the methods of AI are the appropriate methods for cognitive science. Fodor, however, has argued that AI bears the same relation to psychology as Disneyland does to physics. This claim is examined in light of the widespread but paradoxical acceptance of the Turing Test--a behavioral criterion of intelligence--among advocates of cognitivism. It is argued that, given the recalcitrance of certain deep conceptual problems in psychology, and disagreements concerning psychology's basic vocabulary, it is unlikely that AI will prove to be very psychologically enlightening until after some consensus on ontological issues in psychology is achieved.

1. Introduction

It is widely held that the methods of artificial intelligence (AI) constitute a legitimate--even the preferred--manner of conducting research in cognitive science. This belief is due, in large part, to the primacy of computational functionalism (CF), currently the most influential framework for cognitive science. Functionalism--whether of the computational variety or not--holds that mental states are abstract functions that get one from a given input (e.g., sensation, thought) to a given output (e.g., thought, behavior). Thus, under CF, a mental state is thought to be nothing more than a functional role between certain inputs and outputs, or between these and other functional roles, similarly defined. In this bald state, however, functionalism is not an adequate account of anything. As Fodor (1981a) has pointed out, if the only requirement were the mapping of inputs on to outputs there would be no explanatory value in functionalism because the task could be trivially accomplished. For instance, if I wanted to explain how it is that people can answer questions on a wide array of topics, I could simply postulate the existence of a "question-answering function" which just maps all questions on to their answers. Such an explanation would, of course, be completely vacuous. In order to make functionalism function, so to speak, one must put constraints on the sorts of functions that are considered to be reasonable psychological theories. The computational answer is to allow only those functions that can be implemented on Turing machines. A Turing machine is not a machine at all in the usual sense of the term but, rather, an idealized model of a machine that can read, write, and erase a finite number of symbol types situated in a specified order on a (potentially infinite) length of tape. The machine "decides" which parts of the tape are to be read, written on, or erased according to rules given in a Turing machine table. Such tables are, in effect, (very) low-level computer programs. By restricting one's functions to those implementable on a Turing machine, one rules out the positing of "question-answering functions", and the like, unless one can specify a machine table that would enable a Turing machine to actually accomplish the task at hand. Of course, one could propose other sorts of functionalisms (logical behaviorism, for instance, was just such an alternative sort of functionalism; see Fodor, 1981b, pp. 9-10) but the computer model seems to have the upper hand these days, partly because it lends itself to realism about mental states, whereas behaviorism encourages mental instrumentalism, or even eliminativism.
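To make the notion of a machine table a little more concrete, the following minimal sketch (in Python, and purely illustrative--it is mine, not anything from Fodor or Turing) treats the table as nothing more than a lookup from (state, symbol) pairs to instructions, together with the trivial control loop that follows it:

def run_turing_machine(table, tape, state="start", blank="_", max_steps=1000):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]   # consult the machine table
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A hypothetical machine table: scan right across a block of 1s and append one
# more 1 (i.e., compute the successor of a number written in unary notation).
successor_table = {
    ("start", "1"): ("1", "R", "start"),   # keep moving right over the 1s
    ("start", "_"): ("1", "R", "halt"),    # at the first blank, write a 1 and stop
}

print(run_turing_machine(successor_table, "111"))   # prints "1111"

The table here does nothing more interesting than compute the successor of a unary numeral; the point is only that the "program" is exhausted by the table's entries.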

It is often argued that if one is a committed CF-ist, for each cognitive process one wants to investigate, one must develop a computer program (or at least show how one might, in principle, be developed) that can replicate the mental process in question. By hitching one's functionalism to one's computer one keeps the bogeyman of vacuous circularity at bay. Because such programs are simply instances of (or, at least, attempts at) AI, AI becomes the obvious method of choice for researchers in cognitive science.

Jerry Fodor is among the best-known and most influential advocates of computational functionalism. In a recent paper, however, he rejected AI as a creditable method of cognitive-scientific research. The context was one in which Dennett (1991a, p. 91) accused Fodor of suggesting that the whole enterprise of "Good Old Fashioned AI" (GOFAI) is ill-founded. Such a suggestion, Dennett argued, is inconsistent with the main tenets of CF. If so, it would be incoherent of Fodor to be pro-CF and anti-AI. In response Fodor wrote:

...in a sense I do believe that the "whole enterprise of GOFAI is ill-founded". Not because it's got the wrong picture of the mind, however, but because it has a bad methodology for turning that picture into science. I don't think you do the science of complex phenomena by attempting to model gross observable variance. Physics, for example, is not the attempt to construct a machine that would be indistinguishable from the real world for the length of a conversation. We do not think of Disneyland as a major scientific achievement. (Fodor, 1991, p. 279)

It is the main aim of this paper to examine Fodor's claim against AI, in light of his advocacy of CF. I will argue that there are, indeed, several interesting parallels of the sort Fodor suggests, especially when AI is looked at in terms of its most widely celebrated criterion for success, the Turing test. More importantly, whereas there has long been consensus on what physics is supposed to explain, and what entities are to be included in those explanations, there is little such consensus in psychology and, thus, a successful cognitive Disneyland would probably be even less useful to psychology than a physical Disneyland would be to physics.

To outline the argument a little more fully, after dispensing with some likely misinterpretations of Fodor's central claim, I try to explicate the Disneyland analogy, showing that the question of how "realistic" a given Disneyland is depends, in part, on whether its evaluators conform with certain a priori criteria of "normal" action. I then show that precisely analogous issues come into play when considering the validity of the Turing Test, thereby lending credence to Fodor's analogy. Next, I argue that the very ability to distinguish between Disneyland and the real world depends to a significant degree on one's prior scientific understanding of worldly phenomena. Since, on any reasonably realistic reckoning, we do not have a very good idea of what the determinants of real psychological states and processes are, it follows that cognitive scientists may well be in a significantly worse position vis-a-vis distinguishing the real from the ersatz than is a physicist investigating Disneyland. I go on to suggest that, in the absence of such knowledge, AI seems to have implicitly turned to a strategy reminiscent of one that has been used by those 20th century semanticists wanting to rid themselves of the troublesome problem of reference; a strategy of especially dubious value in an empirical setting such as cognitive science. Finally, I suggest that the diverse ontological commitments of cognitive scientists make it very unlikely that the outcome of any AI research, as it is currently conducted, will prove compelling enough to lead to a reasonable consensus about the nature of mental states or processes, and that work of the type practiced by Fodor is, at this point in cognitive science's history, far more likely to be significantly fruitful.

2. What Not to Think about Fodor

The first task is to dispel some likely misinterpretations of the Disneyland quotation above. One might think that because Fodor specifically cites GOFAI, that he is exempting connectionist AI from his argument. Nothing could be further from the truth. Fodor is a fierce detractor of connectionist models of mind. With Zenon Pylyshyn (1988) he has argued that connectionist models do not--or, more precisely, are not inherently constrained to--exhibit the productivity, generativity, and systematicity characteristic of many mental activities, most notably language. In fact, with regard to the ongoing debate with his primary connectionist adversary, Paul Smolensky, Fodor says,

As far as I can tell, the argument has gone like this: Fodor and Pylyshyn claimed that you can't produce a connectionist theory of systematicity. Smolensky then replied by not producing a connectionist theory of systematicity. Who could have foreseen so cunning a rejoinder? (Fodor, 1991, p. 279)

Thus, it cannot be claimed that Fodor's critique of GOFAI was intended to let connectionism off the hook. It was a response to a question specifically about GOFAI.

A second possible misconstrual is that Fodor is rejecting the "strong" view of computational functionalism. According to John Searle (1980, 1984), the thesis of "strong" AI is that symbol manipulation of a syntactic sort just is a cognitive process. Searle is pretty clear about whom he has in mind when he discusses "strong" AI. He lists the giants of the field: Allen ("we have discovered that intelligence is just symbol manipulation") Newell, Herbert ("we already have machines that can literally think") Simon, Marvin ("we'll be lucky if the next generation of computers keep us around as house pets") Minsky, and John ("thermostats have beliefs") McCarthy (all quotations cited in Searle, 1984, pp. 29-30). In personal communications, Searle has said that early Putnam, as well as Fodor and Pylyshyn, are to be included in the "strong" AI group as well. By contrast, under the "weak" view of AI, programs are just simulations of cognitive processes, rather than actual instantiations of them. They bear exactly the same relation to psychology that computational simulations of weather systems bear to meteorology. No one for a moment believes them to be actual weather systems.

There is an interesting terminological ambiguity at work here that is crucial to the present issue. Searle speaks of "strong" and "weak" AI and puts Fodor in the "strong" camp. Fodor, however, rejects AI, but subscribes to what might be called "strong" CF. According to "strong" CF--at least according to Fodor--the "right kind" of computational symbol manipulation is thought to actually instantiate a cognitive process, but it is doubtful that Fodor would be so liberal as to attribute true cognitive capacity to just any such computation (as, at least, Simon and McCarthy seem to). What counts as the "right kind" of computation, however, is never made very clear, or at least has not been fully worked out as yet (though Pylyshyn's (1984) notion of "functional equivalence" seems to be generally accepted, if not entirely clear). Thus, there is nothing in the "Disneyland" quotation given above to indicate that Fodor has had a crucial change of heart on the question of the relation between computation and cognition. He is still a full-fledged computationalist. He just doubts that the tools of program design will settle the important questions that cognitive science is faced with (e.g., How are the rules and results of cognition represented? What are intentionality and rationality? What, if anything, are consciousness and qualia?).

A related misunderstanding to be avoided is that it is being claimed that computers have no part in psychological theorizing at all. What we have here is really a debate among advocates of CF: Simon, Newell, Minsky, McCarthy, and others, on the one hand, advocating "strong" AI as the best way of advancing CF; Fodor, on the other hand, rejecting "strong" AI as a productive strategy for advancing CF. Many would say that even if CF is entirely wrong, there is a place for computational work in psychology. Formalizing psychological theories in the form of computer programs assists us in evading certain problems that have been nearly endemic in the psychological theorizing of the past. Primarily, computational formalization forces the theorist to be utterly explicit about exactly what is being claimed by each premise of the theory. In the past, it has often been the case that psychological theories relied so heavily on pre-theoretic psychological intuitions, that it has turned out that theorists were implicitly assuming the very things they were attempting to explain. The requirement of computational formalization of theories compels us to be less sanguine about this possibility, and catches many instances of it before a given theory even makes it to press.

While this is all true, I think it bears little on the present debate. As mentioned above, if "weak" AI is all that is claimed, then the claim doesn't carry much psychological "punch". Under "weak" AI, the computer is no more intrinsic to psychological theory than it is to meteorological theory; it is just a tool, of no more special importance to the theory than the pencil with which one writes one's theoretical notes (though with the useful capacity, noted just above, that pencils lack). The question at hand, however, concerns whether, among those who believe the computer to be essential to psychological theory, one should choose to pursue the goal via computer programming, or via conceptual analysis (both, of course, including liberal sprinklings of empirical fact along the way). This is a separate matter entirely, and the one on which this paper is focused.

3. Is AI Disneyland?

With this detritus cleared out of the way, the primary question that remains is whether or not, as Fodor claims, AI bears the same relation to psychology as Disneyland does to physics.

3.1. What is Disneyland?

At Disneyland, roughly speaking, various mechanical contrivances are put in place out of view that cause the apparent ground, the apparent river, the apparent plant, the apparent animal, the apparent person, etc. to function as one would expect the real McCoy to. Two considerations that might be called Disney Principles are of particular note here. First, there are no real grounds, rivers, plants, animals, or people involved (at least in the parts with which we are here concerned). Second, for the most part, none of them are very convincing, except to the very young. So we have to imagine a sort of Ideal Disneyland--perhaps something more akin to Westworld, of cinematic fame--in which the features of the ground, rivers, animals, people, etc. are indistinguishable from the real thing.

Here we come to the first difficulty. Indistinguishability may seem to be a pretty straightforward concept, but it is not. What counts as indistinguishable necessarily depends on an implicit set of constraints about what sorts of investigations one is allowed to pursue. There is no "pure" indistinguishability, as it were. If one digs (deep enough) into the ersatz ground, one finds, among other things, the machine that causes the simulated earthquake. If one traces the ersatz river to its source, one finds a (probably very big) faucet. If one cuts open one of the ersatz plants or animals or ersatz people, one finds (perhaps) a mass of electronic circuitry rather than flesh and organs. One might ask why there cannot be "pure" indistinguishability in Ideal Disneyland, but to have such would violate the first Disney Principle given above; viz., that there are no real grounds, rivers, plants, animals, people, etc. I take it that one of the implications of Leibniz's Law is that if the ersatz entities were completely indistinguishable from real ones, then they would be real ones as well, and, by implication, that this land would not be Disneyland but, rather, part of Anaheim, CA itself. So we are forced to say that the Ideal Disneyland is indistinguishable from the real world only within certain constraints of normal action (where normal does not include digging too deep, following the river too far, or doing violence to the local critters).

3.2. Parallels with the Turing test.

One might be led by such considerations to ask, "if AI is to psychology as Disneyland is to physics, then what part of the AI-psychology relation corresponds to the constraints placed on the actions of the investigator of Ideal Disneyland?" The question is pertinent, and leads us to consider Fodor's claim that the hypothetical machine, "would be indistinguishable from the real world for the length of a conversation" (italics added). The implicit reference is clearly to the Turing Test (Turing, 1950). To review, the Turing Test is supposed to distinguish between intelligent and unintelligent computer programs by pitting them, in conversation with a human interrogator, against another human. Turing argued that, if the interrogator is unable to tell which of the interrogatees is the human and which is the computer, then there is no reason to deny full, actual intelligence to the program being tested.

Interestingly, in support of Fodor's comparison, Turing put constraints, parallel to those discussed above with reference to Disneyland, on the powers of his interrogator to decide which of the entities with which he is conversing is human, and which a computer. Namely, he stipulated that the conversation must take place remotely, via teletype, to prevent the investigative game from being "given away" by the merely physical, as opposed to the cognitive, characteristics of the participants.

It is nothing short of paradoxical that the cutting edge of cognitivism--AI--would adopt so stringently behaviorist a test for its success. On at least one popular account of what happened in psychology in the 1950s and 1960s, behaviorists such as Skinner said, "There is no point in looking inside the black box. You will find nothing there of value," to which the incipient cognitivists (including the "Artificial Intelligentsia") replied, "Nonsense. To the degree that we understand the contents of the black box, we will understand the real determinants of behavior rather than just the statistical abstractions offered by S-R psychology."

When it came time to judge whether or not AI-ists had succeeded in producing AI, however, they resorted to the behaviorist tactic of restricting access to the inside of the black box, under the pretext that such access would give up the game. Unfortunately, however, such a restriction has other undesirable implications. If I, playing the role of Turing's interrogator, am unable to distinguish between the computer and the human, it may not be because the two are indistinguishable in principle, but rather because I don't know what questions to ask. Access to the innards of the machine might well give me crucial clues as to how I might be able to show, as behaviorally as you like, the computer to be a fraud. Not only is such access legitimate; it is in the very tradition that allowed cognitivism itself to bring down behaviorism. Cognitivism cannot deny such access, on pain of something very much like incoherence.

Turing's restriction was, I believe, not intended to prevent access to crucial information. It was motivated by the suspicion that once people know which of the two interrogatees is the machine, they will be biased toward accepting evidence of its un-intelligence that they would not find so decisive if kept behind a "veil of ignorance". In short, the device was intended to keep the players honest in exactly the same way that Rawls' (1971) "veil of ignorance" was intended to keep his players of the "justice game" honest. All other relevant knowledge, however, is permissible, and must, in principle, be accessible.

The question, then, reduces to one of how the relevant information--information that might show the computer to be a cognitive fraud--can be gotten to the interrogator without telling him or her which interrogatee is the machine. One way to accomplish this would be to let another person--an assistant interrogator, so to speak--climb into the machine, find out what question would likely trip it up, and tell the interrogator to ask that question, but not anything about the reasons for asking it. In this way, the veil of ignorance has been maintained; the interrogator does not know which of the interrogatees is a computer. The crucial information, however, in the encrypted form of a question to be asked, is put into play. If, from the answer to the question, the interrogator is able to tell which of the interrogatees is the human and which the computer, then the program fails the test, but the "veil of ignorance" under which Turing wished the interrogator to operate has been left intact.

3.3. The interpretation of scientific discoveries.

I have, thus far, tried to explicate the comparison of AI to Disneyland, and the comparison seems to be borne out, but there are still other serious discrepancies between Ideal Disneyland and AI. One has to do with how scientific discoveries are interpreted. As described above, in order to show that Ideal Disneyland is a fraud, I could dig into the ground and expose the device that causes the simulated earthquake. In order to make my discovery a meaningful piece of data, however, I would have to compare my discovery to my prior beliefs about the causes of real earthquakes. If I had no knowledge of the geological determinants of real earthquakes, I might well dig into the Disney ground, find the relevant device, and conclude that real earthquakes are caused by such devices as well; i.e., that Disney quakes are real quakes. This is effectively what the Turing test interrogator is asked to conclude if the machine, in fact, passes the test. Contrary to the situation with earthquakes, however, we just don't know, in large part, what conditions are necessary and sufficient to the instantiation of real cognitive states and processes. Thus, when we find a certain piece of machinery underlying an artificial cognitive process, we don't know whether it shows the machine to be a fraud, or counts as a bona fide discovery about cognition.

Rather than digging down into a ready-built machine, however, AI-ists build them up from scratch, attempting to engineer them so that they will behave in the ways in which we know naturally cognitive entities behave (cognitive neuroscientists may be said to do the former). When they succeed, they conclude that they have found the architecture of real minds. The question is whether this inference is justified. I think the closeted assumption underlying this strategy is one that is borrowed--knowingly or unknowingly--from conceptual-role theories of semantics. There is an implicit belief that there is only one way a formal system could be if it were to be able to exhibit a specific, but indefinitely large, set of features. If so, any changes to the "right" system would either reduce the size of the set of data captured by it, or yield a system isomorphic to the original one.

John Haugeland (1985) gives the following example. If I give you only a couple of short letter strings and ask you to decode them (i.e., translate them into English), you are going to have a very difficult time because of the indeterminacies involved. You may well find two, three, or many more equally plausible alternative decodings, and have no way to choose among them. What you need are more strings in the same code. These will allow you to falsify some of the various hypotheses consistent with the initial set of strings. For instance, imagine that you are given the strings "clb" and "fjb". You would likely surmise that the last letter of both is the same. But exactly what letter it is is impossible to tell. All you know about the other two letters in each string is that none of them are the same. Thus, your decoding hypothesis would include all pairs of three-letter words that end with the same letter, and have no other identical letters among them; quite a large set.

Next imagine that the strings "bzb" and "czc" were added to your decoding set. With these two extra examples you would be able to rule out many hypotheses consistent with the first two. You now know that whatever "b" represents can begin a word as well as end it (assuming it represents the same thing in both cases), and that whatever "c" represents can end a word as well as begin it (ditto). Your knowledge of English might also lead you to suspect (though not decisively conclude) that "z" represents a vowel. Given more strings, so goes the story, you would be able to narrow down the possibilities to a precious few, and ultimately to a unique one.
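The narrowing-down process can be put in the form of a small program. The sketch below is my own toy illustration (with an invented eight-word vocabulary standing in for a full English word list): it counts the decoding hypotheses--assignments of words to the coded strings under a single one-to-one letter substitution--that survive as strings are added.

from itertools import product

# An invented eight-word mini-vocabulary standing in for an English word list.
VOCAB = ["pod", "red", "bed", "rod", "pad", "dad", "pap", "pop"]

def consistent_decodings(ciphertexts, vocab):
    """All assignments of vocabulary words to the coded strings such that one
    one-to-one letter substitution decodes every string at the same time."""
    hypotheses = []
    for words in product(vocab, repeat=len(ciphertexts)):
        mapping = {}
        ok = True
        for cipher, word in zip(ciphertexts, words):
            if len(cipher) != len(word):
                ok = False
                break
            for c, w in zip(cipher, word):
                if mapping.setdefault(c, w) != w:   # one code letter, two readings
                    ok = False
                    break
            if not ok:
                break
        if ok and len(set(mapping.values())) == len(mapping):   # substitution is one-to-one
            hypotheses.append(words)
    return hypotheses

print(len(consistent_decodings(["clb", "fjb"], VOCAB)))                 # 12 hypotheses
print(len(consistent_decodings(["clb", "fjb", "bzb", "czc"], VOCAB)))   # 2 hypotheses

With this toy vocabulary, the first two strings leave a dozen hypotheses standing; adding "bzb" and "czc" cuts them to two.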

Belief in a similar story, I think, implicitly guides AI research. There are endless programs that will get a computer to produce a single grammatical English sentence. By parity of argument, there must be far fewer that will enable it to produce 100 grammatical English sentences. And, ultimately, there must only be one that can enable it to produce any one of the infinitely many grammatical sentences of the English language. If you find that one program, so the assumption goes, you must have the program which guides our production of English sentences. This is, I think, the AI researcher's implicit belief.

John Searle (1992) has characterized the assumptions of the Artificial Intelligentsia somewhat similarly. He writes:

The idea [in computational cognitive science research], typically, is to program a commercial computer so that it simulates some cognitive capacity, such as vision or language. Then, if we get a good simulation ... we hypothesize that the brain computer is running the same program as the commercial computer... (p. 217)

As Searle goes on to point out, however, there are some serious difficulties with this approach:

Two things ought to worry us immediately about this project. First we would never accept this mode of explanation for any function of the brain where we actually understood how it worked at the neurobiological level [e.g., frog vision]. Second, we would not accept it for other sorts of system that we can simulate computationally [e.g., Can word processors be said to "explain" the functioning of typewriters?]. (p. 217)

To bring this discussion to bear on the main point, the AI researcher, in effect, just builds cognitive Disneyland after Disneyland, attempting to capture more and more cognitive "behavior" with each new model, in the hope that eventually one will capture it all. The plan is that the one that does so will be declared the correct one. If one could ever capture all cognitive behavior, even in a given domain of cognition, this might be a reasonable plan. In practice, however, one can never be sure of capturing all such behavior because it is an infinitely large set. On the critique I am developing here, the capturing of any finite subset would not be enough because, just as infinitely many theories capture any finite set of data, so infinitely many programs can be written to produce any finite set of outputs. (This argument is, incidentally, precisely analogous to the one Chomsky has directed against advocates of traditional empiricist theories of language acquisition.)
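By way of a toy illustration of the critique just stated (mine, not the paper's): two quite different programs--a crude rule and a bare lookup table, both hypothetical--agree on every item in a finite test set, so no finite record of outputs can, by itself, single out "the" program behind the behavior.

def plural_by_rule(noun):
    """A crude rule-based pluralizer."""
    if noun.endswith(("s", "x", "ch", "sh")):
        return noun + "es"
    return noun + "s"

# A bare lookup table covering only the cases that happen to have been observed.
PLURALS = {"cat": "cats", "box": "boxes", "dish": "dishes"}

def plural_by_table(noun):
    return PLURALS[noun]

observed = ["cat", "box", "dish"]
print(all(plural_by_rule(n) == plural_by_table(n) for n in observed))   # True
# Both programs "capture" the finite data, yet they are not the same theory.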

This critique is very strong. In fact, it is too strong because it bears against the whole of science, not just against AI-based cognitive science. There are infinitely many physical phenomena, for instance, to be captured by the laws of physics, but this does not prevent us from developing theories that we hold, even if tentatively, to be true. Perhaps, then, AI can exploit this similarity to save itself from Fodor's critique, if we consider AI programs to be nothing more and nothing less than automated theories of cognition. Recall that this is exactly the claim of weak AI, which does not indulge in the additional assumption of strong AI that the "right" program will somehow rise up out of the realm of the artificial and become a real mind. Note that Fodor himself believes the additional claim of strong AI, although he doesn't believe that writing programs is the right way to find the "right" program. If AI programs are considered to be just theories of cognition then it might be plausibly argued that AI is following exactly the same scientific course as physics; it's just a few centuries behind. In a superficial way, this may be so, but I think the problem runs deeper: the analogy between AI and physics does not hold, as I explain in the next section.

3.4. Are computer programs scientific theories?

Up until the last three paragraphs, I had stuck pretty closely to discussing Fodor's claim that AI is just Disneyland for cognitivists. I have now moved a little beyond those confines to a more general ontological question. As mentioned above, it is believed by many that computer programs in AI are nothing more or less than scientific theories of cognition. I will not dispute the point in principle, but I believe that they have historically not been particularly good theories in that they have not had the explanatory or predictive characteristics of good scientific theories. One thing about theories in physics, and other developed sciences, is that they have very broad scope. Consider Newton's laws of motion: they cover the movement of everything from baseballs thrown by little boys to planets orbiting distant stars. Darwin's theory of evolution covers the genesis of everything from flu germs and bread mold to Douglas firs and elephants. By comparison, AI-based cognitive theories typically cover a very small range of mental phenomena: the perceptual identification of pictures, or the solving of a single class of reasoning problems, or the latencies in reading words that occur with different frequencies in the language. The most explicit attempt to break out of this tradition--Newell, Simon, and Shaw's General Problem Solver--has become most notable for having not been particularly general in its ability to solve problems. The micro-worlds tradition of the 1970s can be seen as an unconditional surrender to the problem of narrow theoretical scope. Although micro-worlds are now a thing of the past, the faith that solutions to small problems might somehow ultimately "scale up" is still very much with us.

At the risk of seeming inconsistent, I see no reason to deny that, in principle, AI programs might one day serve as good models (as opposed to theories) of cognition; i.e., working instantiations of principles of cognition that we have discovered by other means. What I don't see is how such programs will figure significantly in getting us to that day. I do not believe that the explanation of intentionality, for instance, will come from a fancy program. The program that instantiates intentionality (if there is ever to be one) will, conversely, be developed on the basis of an analysis (still lacking) of what intentionality is, and how it can arise from an apparently non-intentional hunk of matter such as a brain or a computer.

The question that troubles us all--pro-AI and anti-AI alike--is why we in cognitive science seem to have made so little progress in answering such questions. The reason, I think, lies, at least in part, in questions of ontology. Physics, chemistry, and biology have developed widely accepted sets of entities that their theories quantify over in a way that psychology does not. To put it a little crudely, the difference is that physics knows what it is talking about--e.g., pendula and falling bodies--and to the extent that it captures the behavior of these entities with its laws, physics works. Psychology does not, in this sense, know what it is talking about. The debate still rages over whether thoughts--presumably at least one of the ground-level entities of psychology--are to be regarded as propositional attitudes, whether they belong in our ontological vocabulary at all, and if so, how they are related to the other entities in our ontology. And if the status of thoughts is not divisive enough, consider the academic blood spilt over intentionality, consciousness, qualia, and the "frame problem". Contemplate, in contrast, the situation in physics. No one (to my knowledge) ever disagreed that explanations of falling bodies and the movements of the planets were among the goals of physics. In psychology, however, there is no similar consensus, in no small measure because the entities of psychology do not seem to be physical in any straightforward way (in spite of repeated efforts to "reduce" them to behavior, neural activity, and the like). This is not to claim that such a reduction or identification is impossible in principle; just that it has never been successfully achieved to any appreciable degree.

To this extent, it is unlikely that any AI program--no matter how clever, no matter how fascinatingly human-like in its behavior--could ever, in the current climate, gain consensus as an example of true intelligence. By those who disagreed with its basic psychological building blocks, it would be dismissed as a contrivance; a machine that is very cleverly programmed, but that cannot be seriously countenanced as an actual instance of real psychological functioning. This state of affairs is highly reminiscent of Kuhn's (1970) description of pre-paradigmatic "schools of thought". The members of the various schools consider different phenomena to be "basic", and invoke different theoretical vocabularies in order to explain these phenomena. Consequently, they have little to discuss with each other.

This is not to endorse Kuhn's broader view of science. Neither is it to suggest that all ontological questions must be answered before science can begin its work. But surely there must be some consensus on ontological matters before any theory--computerized or no--can hope to gain broad support. Cognitive science has not yet met this criterion.

3.5. What's the alternative to AI?

So, if AI computer programs are a bad methodology for cognitive science, as Fodor claims, what is better? In particular, what would Fodor give us instead? Cognitive scientists, I think, often forget that, prima facie, there is no more reason for psychologists to be interested in computers per se than they are in, say, toasters. The computational model has taken hold of psychology for specific reasons; reasons that, although they hold promise, are far from having been fully established.

These reasons are, perhaps, most clearly put forward by Fodor (1992) himself in an article intended for popular consumption in the Times Literary Supplement. There he argues that psychology has, in essence, three problems of long standing that require resolution before any real progress can be seriously expected: consciousness, intentionality, and rationality. About the first he simply says (Dennett's (1991b) recent attempt notwithstanding), "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness" (1992, p. 5). Concerning the question of intentionality he is slightly more optimistic, but notes that it is really a question of semantics, rather than of philosophy of mind, though he admits it to be sometimes difficult to know which problem one is working on at a given instant.

The question of rationality, however, is where Fodor believes a big breakthrough may have come, viz., in the form of the Turing machine. He writes:

Having noticed [the] parallelism between thoughts and symbols [viz., that they are both intentional, though we don't exactly understand how], Turing went on to have the following perfectly stunning idea. "I'll bet", Turing (more or less) said, "that one could build a symbol manipulating machine whose changes of state are driven by the material properties of the symbols on which they operate (for example, by their weight, or their shape, or their electrical conductivity). And I'll bet one could so arrange things that these state changes are rational in the sense that, given a true symbol to play with, the machine will reliably convert it into other symbols that are also true." (1992, p. 6)

Such a system would be rational, at least in the sense that it would preserve truth from premises to conclusions. This is no small matter. If John Haugeland's (1985, chapter 1) introductory historical account is to be believed, this was no less than the essence of the psychological quests of Hobbes, Descartes, Hume, and a host of others.
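For illustration (my example, not Turing's or Fodor's), here is a minimal sketch of such truth preservation: a rule that consults only the shape of the strings it is handed, and yet, given true premises, can produce only true conclusions.

def derive(premises):
    """Purely syntactic rule: from 'p' and 'if p then q', derive 'q' (and repeat)."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            if s.startswith("if ") and " then " in s:
                antecedent, consequent = s[3:].split(" then ", 1)
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

premises = {"it is raining", "if it is raining then the street is wet"}
print(sorted(derive(premises)))
# The procedure never consults what the sentences mean; it manipulates strings
# by their shape alone. Yet if the premises are true, so is everything derived.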

Moreover, I suggest that this is really the only reason, at present, that computers are of significant interest to psychology, and that if they fail at this task (i.e., if the parallel turns out to be only an analogy, and not an identity) computers are of no more intrinsic interest to psychology than are calculators, typewriters, and pencils. I don't think, however--and apparently neither does Fodor--that questions of this sort (i.e., Can a machine be conscious? intentional? rational?) are going to be answered by simply writing programs. There is no question that one can write programs that mimic rationality, at least in a limited domain. The way to distinguish between machines that merely mimic rationality, and those that really have it, however, does not involve just writing more programs. It is done by engaging in some serious conceptual analysis about what is constitutive of true rationality. Once that question is answered, the question of actually building a machine to those specifications is practically trivial.

I have often found it useful to ask myself, in evaluating the claims made for one or another computer program by some member of the artificial intelligentsia, whether the individual involved would continue to be interested in psychological matters should there come forward a definitive proof that minds are not in any deep way like computers. If the answer is "probably not" then there is a conflict of interests that should be taken into account. In such a context, ringing proclamations that brains are "just information processors", and admonitions that "there's no need to get mystical (read: "philosophical") about minds", are to be understood as ideologies, not as discoveries. For such people, if cognition is not just computation, then it's not really worth talking about. Some of us, however, will continue to be interested in cognition even if it turns out to be something altogether different from computation.

Thus, the proper response to the question, "What would Fodor give us instead of AI?" is, presumably, "Exactly what he has given us over the last quarter century--an extended discussion of, and debate about, the sorts of phenomena that must be accounted for by any widely acceptable theory of psychology, and of whether and how those phenomena might be, in principle, instantiated in a computational system." It comes down to a question of whether we have yet done enough conceptual analysis to have a good enough idea of what we are talking about to start building machines. Cognitive machines would be the crowning glory of cognitive science; the proof that indeed we had the right theory of cognition. That building machines now is the way to achieve that goal, however, is a claim that it is permissible to doubt. Consider a comparison with the building of automobiles. The theory of the internal combustion engine came first. After that, the actual construction was, more or less, a trivial matter. Nearly half a century of AI does not seem to have brought about the same progress as did the first half century of building cars. Compare, for instance, the Indy 500--even of the 1930s--to the Loebner competition (at which computer programs are submitted to a highly constrained version of the Turing Test).

4. Conclusion.

I have argued that Fodor's analogy is correct, at least in certain significant details. This assessment has been based on the discovery of significant similarities in the problems confronted by the investigator of an Ideal Disneyland and the interrogator in the Turing Test. These included the claim that indistinguishability depends on an arbitrary criterion of "normal" investigative actions, and that the interpretations that one makes of one's discoveries are, in part, a function of one's previous knowledge of the thing being investigated. I have also argued that AI's attempt to escape from the latter of these two problems, by way of borrowing a page from the book of conceptual-role semantics, is of dubious validity, and that even a reversion to the "weak AI" assertion that programs are just theories of cognition (rather than actual instantiations of it) will not save the day.

Twenty-five years ago Fodor (1968) claimed that there was "something deeply (i.e., conceptually) wrong with psychology" (p. vii). Those deep conceptual difficulties remain with us, and, tedious as it may seem to some, until they are sorted out I see little hope that a machine is going to come along either to solve or to dissolve them. Each program will be dedicated, either explicitly or implicitly, to a certain set of basic psychological entities, and to certain explications of psychological processes. Those choices, more than the behavior of the program, will determine who buys in and who sells off. This is not just ideological prejudice, but a reflection of the fact that, in the final analysis, behavior is not all that we are interested in. Internal structure and function are just as important, perhaps more so.

Despite repeated attempts over the last century to declare philosophy irrelevant, archaic, dead, or otherwise unpleasantly aromatic, it is still very much with us in psychology. Machines come and machines go but the same conceptual problems that dogged Skinner, Hull, and Watson; Heidegger, Husserl, and Brentano; Hume, Berkeley and Locke; Kant, Leibniz, and Descartes are with us still. In effect AI is the attempt to explicitly build an instance of something that is, at present, not at all well understood, and I judge the odds of success to be about on the same order as the odds of a toddler rediscovering the architectural principles governing the dome, while playing with building blocks.

References

Dennett, D. C. (1991a). Granny's campaign for safe science. In B. Loewer & G. Rey (Eds.), Meaning in mind: Fodor and his critics (pp. 255-319). Cambridge, MA: Blackwell.

Dennett, D. C. (1991b). Consciousness explained. Boston: Little, Brown, & Company.

Fodor, J. A. (1968). Psychological explanation: An introduction to the philosophy of psychology. New York: Random House.

Fodor, J. A. (1981a). The mind-body problem. Scientific American, 244, 114-123.

Fodor, J. A. (1981b). Introduction: Something on the state of the art. In Representations: Philosophical essays on the foundations of cognitive science (pp. 1-31). Cambridge, MA: MIT Press.

Fodor, J. A. (1991). Replies. In B. Loewer & G. Rey (Eds.), Meaning in mind: Fodor and his critics (pp. 255-319). Cambridge, MA: Blackwell.

Fodor, J. A. & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3-71.

Fodor, J. A. (1992, July). The big idea: Can there be a science of mind? Times Literary Supplement, pp. 5-7.

Harnad, S. (1989). Minds, machines, and Searle. Journal of Experimental and Theoretical Artificial Intelligence, 1, pp.-pp.

Harnad, S. (1992). The Turing Test is not a trick: Turing indistinguishability is a scientific criterion. SIGART Bulletin, 3(4), 9-10.

Haugeland, J. (1985). Artificial intelligence: The very idea. Cambridge, MA: MIT Press.

Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago: University of Chicago Press.

Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: MIT Press.

Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-424.

Searle, J. (1984). Minds, brains, and science. Cambridge, MA: Harvard University Press.

Searle, J. (1992). The rediscovery of mind. Cambridge, MA: MIT Press.

Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

Footnotes

1. In fact, all CF is "strong." Any version of CF that didn't take mental states to be just computational functions wouldn't be CF at all.

2. Harnad (1992) has emphatically argued that the Turing Test was not intended to be simply an illustrative game, but an actual scientific criterion of intelligence. Although I agree with Harnad's analysis of the status of the Turing Test, I believe it still to be an open question whether it is the right criterion. Others, most notably Searle (1992, p. 73), have argued strongly that a strictly behavioral criterion such as the Turing Test is not sufficient; viz., that concerns such as the physical structure of the entity in question matter as well.

3. Even Harnad's insistence on indistinguishability "in principle, for a lifetime" (1992, p. 9) is not sufficient. In principle, it must be forever, but more about this below.

4. Harnad (1989) is quite explicit about this assumption in his discussion of the "convergence argument."

5. Notice that even this conclusion is by no means assured. It will not hold true under more complex encryption schemes than "replace each letter by another letter." Precisely such schemes were faced by Turing when he was engaged by the British military to crack the German "Enigma" code during World War II.