Tirassa, M., Vallana, M. (2010)
Representation and computation.
In: The pragmatics encyclopedia, ed. L. Cummings (pp. 399-402).
London and New York: Routledge.
Representation and Computation
Maurizio Tirassa and Marianna Vallana
Università di Torino
Dipartimento di Psicologia & Centro di Scienza Cognitiva
via Po, 14
10123 Torino (Italy)
email tirassa@psych.unito.it, mariannavallana@gmail.it
The notion of representation is one of the most important and controversial in psychology. Leaving aside the senses it was given in the first decades of scientific psychology – which include the works of Frederick Bartlett (1932) and even of behaviourists like Edward C. Tolman (1948), as well as the main body of Gestalt psychology – its contemporary history traces back to the cybernetic turn that took place around the middle of the 20th century. Kenneth Craik (1943) was among the first in modern psychology to argue that the mind operates not directly on external reality, but on internally created models thereof, which it manipulates and uses to understand, simulate and predict world events and dynamics.
Positions of this sort fitted well into the burgeoning cognitive psychology. This discipline viewed the mind/brain as a computer and, via its close relation to artificial intelligence, was to give rise in the 1970s to cognitive science. A Turing machine (Church 1936; Turing 1936) is an abstract characterization of digital computers. It consists of a set of data, written on a tape as tokens of a finite symbolic alphabet (e.g. zeros and ones), and a set of procedures that operate on them. It was all too natural in the heyday of cognitive science to equate – or straightforwardly identify – Craik’s mental models with the data of a Turing machine, their constitutive elementary items with the symbols in its formal alphabet, and their manipulation on the part of the mind with the operation of its programs (e.g. Thagard 1996).
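To make the abstraction concrete, here is a minimal sketch of a deterministic Turing machine in Python; the particular alphabet, states and transition table are invented for illustration and are not drawn from Craik’s or Turing’s texts.

```python
def run_turing_machine(tape, transitions, state="start", blank="_"):
    """Run a deterministic Turing machine until it enters the 'halt' state.

    tape: list of symbols from a finite alphabet.
    transitions: dict mapping (state, symbol) to (new_state, symbol_to_write, move),
    where move is +1 (right) or 0 (stay) in this sketch.
    """
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head == len(tape):
            tape.append(blank)  # extend the tape on demand
        tape[head] = write
        head += move
    return tape

# Example program: flip every bit on the tape, then halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(list("0110"), flip_bits))  # ['1', '0', '0', '1', '_']
```

On the classical reading described above, the contents of the tape play the role of the mental model, and the transition table plays the role of the mind’s program.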
Over the decades that followed, the attempt to identify the code, or the codes, in which knowledge is supposedly represented in the mind gave rise to a major research area. The debate was far from merely philosophical or metaphorical. In an oft-quoted passage, Zenon Pylyshyn (1991: 219) stated that, unlike what happens in other sciences, ‘…in cognitive science our choice of notation is critical precisely because the theories claim that representations are written in the mind in the postulated notation: that at least some of the knowledge is explicitly represented and encoded in the notation proposed by the theory… What is sometimes not appreciated is that computational models are models of what literally goes on in the mind’.
Each representational code is well suited for a corresponding set of reasoning rules, and vice versa: the form of the data and the form of the procedures mirror each other, so that to identify the one practically means to identify the other. However, it was generally maintained that the mind’s program(s), once identified, would turn out to be comparatively simple: ‘An ant, viewed as a behaving system, is quite simple. The apparent complexity of its behaviour over time is largely a reflection of the complexity of the environment in which it finds itself’ (Simon 1981: 64). In Herbert Simon’s famous metaphor, the mind, like the ant, is a simple set of programs, and the complex environment in which it finds itself – and which makes it appear more complex than it actually is – is the set of representations over which it operates. Therefore, the real issue was held to be the identification of the code in which the representations are ‘written in the mind’. Once this code was identified, the mind and its functioning would be substantially understood.
In the 1960s and 1970s many such codes were proposed to capture the nature of human representations: the most notable among them, apart from classical and nonclassical logic, were semantic networks (Quillian 1968; Collins and Quillian 1969; Woods 1975), production rules (Newell and Simon 1972), frames (Minsky 1974), schemata (Bobrow and Norman 1975), scripts (Schank and Abelson 1977; Schank 1980), and mental models (Johnson-Laird 1983; the phrase ‘mental models’ has a specific, more technical meaning in Johnson-Laird’s work than in Craik’s account).
Each proposed notation had its own theoretical specifications and often its own computational and empirical or experimental correlates. What all of them appeared to have in common is the idea that mental representations are coded symbolically and are structured and computable. That representations are coded symbolically means that they are, to quote Pylyshyn again, ‘written in the mind in the postulated notation’; that they are computable means that they can be the input and – once transformed by the program – the output of the mind’s functioning. Taken together these properties mean that the mind/brain is a digital computer. That representations are structured means that the elementary items of which they are composed are linked to each other in complex ways and grouped into meaningful aggregates. Knowledge of restaurants, for example, has to include or be linked to knowledge about rooms, tables, menus, waiters, dishes, money, and so on; knowledge of money has to include or be linked to knowledge about value, trade, banknotes, coins, cheques, jobs, wages, robberies and so on; knowledge of robberies has to include or be linked to knowledge about property, law, banks, guns, police, handcuffs, jails and so on. Each such node may also point to specific examples or instances of the concept which the system has encountered. Thus, an intelligent agent’s overall knowledge system consists in a huge network or graph with different types of nodes and links to connect them. This is in practice a hypertext. Computational theories of representation differ with regard to what structure the hypertext is supposed to have, what types of nodes and links it may contain, what types of inference may be drawn by the processor while it traverses the hypertext, and so on.
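As a purely illustrative sketch (in Python, with invented concept names and link types that do not reproduce any particular theory’s notation), such a network can be pictured as a graph of typed links that a processor traverses:

```python
# A toy semantic network, loosely echoing the restaurant example above.
semantic_network = {
    "restaurant": {"has-part": ["room", "table", "menu"],
                   "staffed-by": ["waiter"],
                   "involves": ["dish", "money"]},
    "money":      {"linked-to": ["value", "trade", "wage", "robbery"],
                   "takes-form": ["banknote", "coin", "cheque"]},
    "robbery":    {"involves": ["money", "gun", "bank"],
                   "linked-to": ["property", "law", "police", "jail"]},
}

def neighbours(concept):
    """Return every concept directly linked to `concept`, together with the link type."""
    return [(link, target)
            for link, targets in semantic_network.get(concept, {}).items()
            for target in targets]

# A processor traversing the hypertext from one node:
print(neighbours("robbery"))
# [('involves', 'money'), ('involves', 'gun'), ('involves', 'bank'),
#  ('linked-to', 'property'), ('linked-to', 'law'), ('linked-to', 'police'), ('linked-to', 'jail')]
```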
Other researchers, while subscribing to the computational paradigm, maintained instead that representations have an analogical nature (most notably Shepard 1980 and Kosslyn 1983) or that they can have both a symbolic and an analogical nature (Paivio 1986). These views gained popularity, albeit in a different form, when parallel distributed models of representation (later known under labels like connectionism or neural networks) were developed (Rumelhart et al. 1986; McClelland et al. 1986).
With the partial exception of the controversy between symbolic and connectionist approaches (Fodor and Pylyshyn 1988; Smolensky 1988), the whole debate about representational codes lost much of its momentum during the 1980s and the 1990s. There were several reasons for this. It became generally understood that if the mind/brain is a digital computer, then all properly constructed representational systems are equivalent, because ultimately they are all materially reduced to the finite alphabet used by the machine that is supposed to be the mind/brain. This thought was encapsulated in the so-called physical symbol system hypothesis, which stated that ‘[a] physical symbol system has the necessary and sufficient means for general intelligent action’ (Newell and Simon 1976: 41; it may be interesting to note that this is actually a postulate, not a hypothesis). It also led to the idea that successful computational intelligence – whether natural (e.g. Barkow et al. 1992; Pinker 1997) or artificial (e.g. Minsky 1985, 1991) – should probably employ different representational and reasoning subsystems according to the features of the context and of the task at hand (but see Fodor 2000).
The field of knowledge representation thus became largely a matter of sheer engineering (e.g. Davis et al. 1993; Brachman and Levesque 2004). With a major emphasis on formal and computational issues and little or no interest in psychological plausibility, knowledge representation is currently considered a province of artificial intelligence more often than of psychology or cognitive science. Simultaneously, many cognitive scientists lost interest in a research topic which was no longer meant to capture the real nature of the (human) mind.
Another reason for the decline of interest in knowledge representation outside artificial intelligence was the growing understanding of the many limits of the classical view. Let us reconsider the assumption that the mind does not operate on the world, but only on the representations of the world that it entertains. This position, which constitutes one of the foundations of computational functionalism, is known as methodological solipsism (Fodor 1980). It requires that the mind/brain be connected to the world via noncognitive subsystems known as modules (Fodor 1983). Thus, the representational and reasoning system only needs to satisfy constraints of completeness, correctness, consistency and, possibly, efficiency, while truth – or, at least, appropriateness to reality – is maintained via nonrepresentational connections to the external world.
A problem with this view is that it only functions on a closed-world assumption. This is the assumption that everything that exists for the system must be either explicitly coded in its knowledge base or formally deducible from what is coded. However, the closed-world assumption gives rise to computationally intractable problems known as the frame problem (McCarthy and Hayes 1969) and the qualification problem (McCarthy 1980). These problems follow from the requirement that each and every effect that a certain action may have or, respectively, each and every precondition that must hold for such an action to be executable, must be explicitly stated in the knowledge base or formally deducible from it. Some researchers think that these two problems imply the impossibility of a computational system operating intelligently in the real open world (Searle 1980; Dreyfus 1992). Others proposed instead that they can be overcome by coding the entire body of knowledge that a computational system would need, which is in practice a description of the whole relevant universe. This was attempted, for example, with the CYC project (Lenat and Feigenbaum 1991; the name of the project comes from the syllable of ‘encyclopaedia’ that is pronounced like ‘psych’) (see Smith 1991 for a criticism of CYC and of its underlying assumptions). It may be interesting to remark that this position also corresponds to the standard position of computational psychology and artificial intelligence that everything in the mind has to be innate: learning from experience is viewed as impossible both in natural and in artificial agents, although the solutions to this impasse seem to differ in the two cases.
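The closed-world assumption can be illustrated with a toy knowledge base (a Python sketch with entirely invented facts and a single rule, unrelated to CYC): whatever is neither explicitly coded nor deducible is simply treated as false, and the frame and qualification problems arise because every effect and precondition of every action would have to be made available in exactly this way.

```python
# Toy closed-world knowledge base: invented facts and one deduction rule.
facts = {("bird", "tweety"), ("bird", "opus")}
rules = [lambda kb: {("can_fly", x) for (pred, x) in kb if pred == "bird"}]

def holds(query, kb, rules):
    """Under the closed-world assumption a query is true only if it is
    explicitly coded or formally deducible; anything else counts as false."""
    closure = set(kb)
    for rule in rules:
        closure |= rule(closure)
    return query in closure

print(holds(("can_fly", "tweety"), facts, rules))   # True: deducible from the rule
print(holds(("engine_ok", "plane1"), facts, rules)) # False: never stated, hence assumed false
```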
A seemingly different attempt to overcome the difficulties of methodological solipsism is to work with agents so simple that they need no knowledge base at all. Mainstream autonomous robotics rejected the whole idea of representation and claimed that cognition can and should be understood without recourse to it: internal models of the world are useless because ‘the world is its own best model’ (Brooks 1990: 6). This allowed investigators to ‘build complete creatures rather than isolated cognitive simulators’, as proposed by Rodney Brooks (1991) in the title of a paper. On the one hand, however, these creatures hardly reach the intelligence level of a simple arthropod (or of any other computer), and scaling up to the human species appears impossible for principled reasons (Kirsh 1991; Tirassa et al. 2000). On the other hand, because their control systems ultimately function on zeros and ones, autonomous robots have been interpreted as an integral part of the symbolic paradigm and therefore of the research program of classical artificial intelligence (Vera and Simon 1993).
Thus, the most radical criticism of the classical view is the claim that the mind/brain is indeed a representational organ, but that the nature of representations is not that of a formal code. John Searle (1992) argued that the representational and computational structures that have typically been theorized in cognitive science lack any acceptable ontology. Not being observable or understandable either in the third person (because all that we can objectively see is neurons or circuitries and not frames or other representational structures) or in the first person (because frames and other representational structures are ‘cognitively impenetrable’, that is, inaccessible to subjectivity or introspection), these structures just cannot exist. Searle (1983) rejected the assumption – undisputed from Craik to Simon – that the representational mind/brain operates on formal internal models detached from the world and argued instead that its main feature is intentionality (see also Brentano 1874), a term which has been variously viewed as synonymous with connectedness, aboutness, meaningfulness, semantics or straightforwardly consciousness.
The idea that representations are constructed (or simply happen) at the interaction of the conscious mind/brain and the external world is also a major tenet of the area known as situated or embodied cognition (e.g. Gibson 1979; Johnson 1987; Varela et al. 1991; Hutchins 1995; Clark 1997; Clancey 1997; Glenberg 1997; Tirassa et al. 2000). Representations here are viewed neither as structured symbolic codes nor as objects of formal manipulation, but as (at least partially culturally constructed) artefacts that are interposed between the internal and the external worlds and that generate a continuous dynamical reconceptualization of meaning. Thus, many researchers in situated cognitive science are constructivist with regard to the nature of knowledge, which they view as a continuously renewed product of consciousness and as tightly bound to action and experience, and practitioners of phenomenology with regard to their methodology, which follows from the idea that the mind only exists in the first person, immersed in time (Heidegger 1927; Merleau-Ponty 1945; Varela et al. 1991; Varela 1996).
See also: Artificial intelligence; cognitive anthropology; cognitive psychology; cognitive science; computational pragmatics; inference; information structure; intentionality; knowledge; modularity of mind thesis; philosophy of mind; rationality; reasoning; Searle, J.
Suggestions for further reading:
Clancey, W.J. (1997) Situated Cognition: On Human Knowledge and Computer Representations, Cambridge: Cambridge University Press.
Lindsay, P.H. and Norman, D.A. (1977) Human Information Processing, 2nd edn, New York: Academic Press.
Varela, F.J., Thompson, E. and Rosch, E. (1991) The Embodied Mind: Cognitive Science and Human Experience, Cambridge, MA: MIT Press.
References:
Barkow, J.H., Cosmides, L. and Tooby, J. (eds) (1992) The Adapted Mind: Evolutionary Psychology and the Generation of Culture, New York and Oxford: Oxford University Press.
Bartlett, F.C. (1932) Remembering: A Study in Experimental and Social Psychology, Cambridge: Cambridge University Press.
Bobrow, D.G. and Norman, D.A. (1975) ‘Some principles of memory schemata’, in D.G. Bobrow & A. Collins (eds) Representation and Understanding, New York: Academic Press.
Brachman, R. and Levesque, H. (2004) Knowledge Representation and Reasoning, San Francisco, CA: Morgan Kaufmann.
Brentano, F. (1874) Psychologie vom Empirischen Standpunkt, Leipzig: Duncker und Humblot; trans. A.C. Rancurello, D.B. Terrell and L.L. McAlister (1995) Psychology from an Empirical Standpoint, London: Routledge.
Brooks, R.A. (1990) ‘Elephants don’t play chess’, Robotics and Autonomous Systems, 6: 3-15.
Brooks, R.A. (1991) ‘How to build complete creatures rather than isolated cognitive simulators’, in K. VanLehn (ed.) Architectures for Intelligence, Hillsdale, NJ: Lawrence Erlbaum Associates.
Church, A. (1936) ‘An unsolvable problem of elementary number theory’, American Journal of Mathematics, 58: 345-63.
Clancey, W.J. (1997) Situated Cognition: On Human Knowledge and Computer Representations, Cambridge: Cambridge University Press.
Clark, A. (1997) Being There: Putting Brain, Body, and World Together Again, Cambridge, MA: MIT Press.
Collins, A.M. and Quillian, M.R. (1969) ‘Retrieval time from semantic memory’, Journal of Verbal Learning and Verbal Behavior, 8: 240-47.
Craik, K.J.W. (1943) The Nature of Explanation, Cambridge: Cambridge University Press.
Davis, R., Shrobe, H. and Szolovits, P. (1993) ‘What is a knowledge representation?’, AI Magazine, 14: 17-33.
Dreyfus, H.L. (1992) What Computers Still Can’t Do: A Critique of Artificial Reason, Cambridge, MA: MIT Press.
Fodor, J.A. (1980) ‘Methodological solipsism considered as a research strategy in cognitive psychology’, Behavioral and Brain Sciences, 3: 63-109; reprinted in J. Haugeland (ed.) (1981) Mind Design, Cambridge, MA: MIT Press.
Fodor, J.A. (1983) The Modularity of Mind: An Essay on Faculty Psychology, Cambridge, MA: MIT Press.
Fodor, J.A. (2000) The Mind Doesn’t Work That Way, Cambridge, MA: MIT Press.
Fodor, J.A. and Pylyshyn, Z.W. (1988) ‘Connectionism and cognitive architecture: a critical analysis’, Cognition, 28: 3-71.
Gibson, J.J. (1979) The Ecological Approach to Visual Perception, Boston, MA: Houghton Mifflin.
Glenberg, A.M. (1997) ‘What memory is for’, Behavioral and Brain Sciences, 20: 1-55.
Heidegger, M. (1927) Sein und Zeit, Tübingen: Mohr; trans. J. Macquarrie and E. Robinson (1962) Being and Time, London: SCM.
Hutchins, E. (1995) Cognition in the Wild, Cambridge, MA: MIT Press.
Johnson, M. (1987) The Body in the Mind: The Bodily Basis of Imagination, Reason and Meaning, Chicago, IL: University of Chicago Press.
Johnson-Laird, P.N. (1983) Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness, Cambridge: Cambridge University Press.
Kirsh, D. (1991) ‘Today the earwig, tomorrow man?’, Artificial Intelligence, 47: 161-84; reprinted in D. Kirsh (ed.) (1992) Foundations of Artificial Intelligence, Cambridge, MA: MIT Press.
Kosslyn, S.M. (1983) Ghosts in the Mind’s Machine, New York: Norton.
Lenat, D.B. and Feigenbaum, E.A. (1991) ‘On the thresholds of knowledge’, Artificial Intelligence, 47: 185-250; reprinted in D. Kirsh (ed.) (1992) Foundations of Artificial Intelligence, Cambridge, MA: MIT Press.
McCarthy, J. (1980) ‘Circumscription: a form of non-monotonic reasoning’, Artificial Intelligence, 13: 27-39.
McCarthy, J. and Hayes, P.J. (1969) ‘Some philosophical problems from the standpoint of artificial intelligence’, in B. Meltzer & D. Michie (eds) Machine Intelligence 4, Edinburgh: Edinburgh University Press.
McClelland, J.L., Rumelhart, D.E. and The PDP Research Group (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2: Psychological and Biological Models, Cambridge, MA: MIT Press.
Merleau-Ponty, M. (1945) Phénoménologie de la Perception, Paris: Gallimard; trans. Colin Smith (1981) Phenomenology of Perception, London: Routledge.
Minsky, M. (1974) A Framework for Representing Knowledge, MIT AI Laboratory Memo 306, Cambridge, MA; excerpts reprinted in P.H. Winston (ed.) (1975) The Psychology of Computer Vision, New York: McGraw-Hill; and also in J. Haugeland (ed.) (1981) Mind Design, Cambridge, MA: MIT Press.
Minsky, M. (1985) The Society of Mind, New York: Simon and Schuster.
Minsky, M. (1991) ‘Logical versus analogical or symbolic versus connectionist or neat versus scruffy’, AI Magazine, 12: 34-51.
Newell, A. and Simon, H.A. (1972) Human Problem Solving, Englewood Cliffs, NJ: Prentice-Hall.
Newell, A. and Simon, H.A. (1976) ‘Computer science as empirical inquiry: symbols and search’, Communications of the Association for Computing Machinery, 19: 113-26; reprinted in J. Haugeland (ed.) (1981) Mind Design, Cambridge, MA: MIT Press.
Paivio, A. (1986) Mental Representations: A Dual Coding Approach, Oxford: Oxford University Press.
Pinker, S. (1997) How the Mind Works, Harmondsworth, UK: Penguin.
Pylyshyn, Z.W. (1991) ‘The role of cognitive architectures in theories of cognition’, in K. VanLehn (ed.) Architectures for Intelligence, Hillsdale, NJ: Lawrence Erlbaum Associates.
Quillian, M.R. (1968) ‘Semantic memory’, in M. Minsky (ed.) Semantic Information Processing, Cambridge, MA: MIT Press.
Rumelhart, D.E., McClelland, J.L. and The PDP Research Group (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, Cambridge, MA: MIT Press.
Schank, R.C. (1980) ‘Language and memory’, Cognitive Science, 4: 243-84.
Schank, R.C. and Abelson, R.P. (1977) Scripts, Plans, Goals and Understanding, Hillsdale, NJ: Erlbaum.
Searle, J.R. (1980) ‘Minds, brains, and programs’, Behavioral and Brain Sciences, 3: 417-56; reprinted in J. Haugeland (ed.) (1981) Mind Design, Cambridge, MA: MIT Press.
Searle, J.R. (1983) Intentionality: An Essay in the Philosophy of Mind, Cambridge: Cambridge University Press.
Searle, J.R. (1992) The Rediscovery of the Mind, Cambridge, MA: MIT Press.
Shepard, R.N. (1980) Internal Representations: Studies in Perception, Imagery, and Cognition, Montgomery, VT: Bradford.
Simon, H.A. (1981) The Sciences of the Artificial, Cambridge, MA: MIT Press.
Smith, B.C. (1991) ‘The owl and the electric encyclopedia’, Artificial Intelligence, 47: 251-88; reprinted in D. Kirsh (ed.) (1992) Foundations of Artificial Intelligence, Cambridge, MA: MIT Press.
Smolensky, P. (1988) ‘On the proper treatment of connectionism’, Behavioral and Brain Sciences, 11: 1-23.
Thagard, P. (1996) Mind: Introduction to Cognitive Science, London: MIT Press.
Tirassa, M., Carassa, A. and Geminiani, G. (2000) ‘A theoretical framework for the study of spatial cognition’, in S. Ó Nualláin (ed.) Spatial Cognition: Foundations and Applications, Amsterdam and Philadelphia: Benjamins.
Tolman, E.C. (1948) ‘Cognitive maps in rats and men’, The Psychological Review, 55: 189-208.
Turing, A.M. (1936) ‘On computable numbers, with an application to the Entscheidungsproblem’, Proceedings of the London Mathematical Society (Second Series), 42: 230-65.
Varela, F.J. (1996) ‘A science of consciousness as if experience mattered’, in S.R. Hameroff, A.W. Kaszniak & A.C. Scott (eds) Toward a Science of Consciousness: The First Tucson Discussions and Debates, Cambridge, MA: MIT Press.
Varela, F.J., Thompson, E. and Rosch, E. (1991) The Embodied Mind: Cognitive Science and Human Experience, Cambridge, MA: MIT Press.
Vera, A.H. and Simon, H.A. (1993) ‘Situated action: a symbolic interpretation’, Cognitive Science, 17: 7-133.
Woods, W.A. (1975) ‘What’s in a link: foundations for semantic networks’, in D.G. Bobrow & A. Collins (eds) Representation and Understanding, New York: Academic Press.