---
abstract: |-
Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules
operating only on the symbols' shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one
accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer,
because not everything can be given a systematic interpretation; and certainly not everything can be given every systematic
interpretation. But even after computers and computation have been successfully distinguished from other kinds of things, mental
states will not just be the implementations of the right symbol systems, because of the symbol grounding problem: The
interpretation of a symbol system is not intrinsic to the system; it is projected onto it by the interpreter. This is not true of our
thoughts. We must accordingly be more than just computers. My guess is that the meanings of our symbols are grounded in the
substrate of our robotic capacity to interact with that real world of objects, events and states of affairs that our symbols are
systematically interpretable as being about.
altloc:
- http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad94.computation.cognition.html
chapter: ~
commentary: ~
commref: ~
confdates: ~
conference: ~
confloc: ~
contact_email: ~
creators_id: []
creators_name:
- family: Harnad
given: Stevan
honourific: ''
lineage: ''
date: 1994
date_type: published
datestamp: 2001-06-18
department: ~
dir: disk0/00/00/15/92
edit_lock_since: ~
edit_lock_until: ~
edit_lock_user: ~
editors_id: []
editors_name: []
eprint_status: archive
eprintid: 1592
fileinfo: /style/images/fileicons/text_html.png;/1592/1/harnad94.computation.cognition.html
full_text_status: public
importid: ~
institution: ~
isbn: ~
ispublished: pub
issn: ~
item_issues_comment: []
item_issues_count: 0
item_issues_description: []
item_issues_id: []
item_issues_reported_by: []
item_issues_resolved_by: []
item_issues_status: []
item_issues_timestamp: []
item_issues_type: []
keywords: |-
Church/Turing Thesis, cognition, computation, consciousness, discrete systems, dynamical systems,
implementation-independence, robotics, semantic interpretability, sensorimotor transduction, symbol grounding problem, Turing
Machine, Turing Test.
lastmod: 2011-03-11 08:54:41
latitude: ~
longitude: ~
metadata_visibility: show
note: ~
number: ~
pagerange: 379-390
pubdom: FALSE
publication: Minds and Machines
publisher: ~
refereed: TRUE
referencetext: |-
Andrews, J., Livingston, K., Harnad, S. & Fischer, U. (in prep.) Categorical Perception Induced by Learning.
Church, A. (1956) Introduction to mathematical logic. Princeton: Princeton University Press.
Dietrich, E. (1990) Computationalism. Social Epistemology 4: 135-154.
Fodor, J. A. (1975) The language of thought. New York: Thomas Y. Crowell.
Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive architecture: A critical appraisal. Cognition 28: 3-71.
Galton, A. (1990) The Church-Turing thesis: its nature and status. AISB Quarterly 74 (Autumn): 9-19.
Harnad, S. (1982) Consciousness: An afterthought. Cognition and Brain Theory 5: 29-47.
Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.
Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.
Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346.
Harnad, S. (1990b) Against Computational Hermeneutics. (Invited commentary on Eric Dietrich's Computationalism.) Social Epistemology 4: 167-172.
Harnad, S. (1990c) Lost in the hermeneutic hall of mirrors. Invited commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321-327.
Harnad, S. (1991) Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54.
Harnad, S. (1992d) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke & R. Lutz (Eds.) Connectionism in Context. Springer-Verlag.
Harnad, S. (1992b) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4) (October): 9-10.
Harnad, S. (1993a) Grounding Symbols in the Analog World with Neural Nets. Think 2(1): 12-78. (Special issue on "Connectionism versus Symbolism," D.M.W. Powers & P.A. Flach, eds.)
Harnad, S. (1993b) Artificial Life: Synthetic Versus Virtual. In: Artificial Life III. Proceedings, Santa Fe Institute Studies in the Sciences of Complexity, Volume XVI.
Harnad, S. (1993c) Problems, Problems: The Frame Problem as a Symptom of the Symbol Grounding Problem. PSYCOLOQUY 4(34) frame-problem.11.
Harnad, S. (1993d) Grounding Symbolic Capacity in Robotic Capacity. In: Steels, L. & Brooks, R. (eds.) The "artificial life" route to "artificial intelligence": Building Situated Embodied Agents. New Haven: Lawrence Erlbaum.
Harnad, S. (1994d) The Origin of Words: A Psychophysical Hypothesis. In: Durham, W. & Velichkovsky, B. (Eds.) Naturally Human: Origins and Destiny of Language. Muenster: Nodus Pub.
Harnad, S. (1994c) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301.
Harnad, S., Hanson, S.J. & Lubin, J. (1991) Categorical Perception and the Evolution of Supervised Learning in Neural Nets. In: Working Papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (D.W. Powers & L. Reeker, Eds.), pp. 65-74. Presented at the Symposium on Symbol Grounding: Problems and Practice, Stanford University, March 1991; also reprinted as Document D91-09, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Kaiserslautern, FRG.
Harnad, S., Hanson, S.J. & Lubin, J. (1994) Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding. In: V. Honavar & L. Uhr (eds.) Symbol Processors and Connectionist Network Models in Artificial Intelligence and Cognitive Modelling: Steps Toward Principled Integration, pp. 191-206. Academic Press.
Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on Virtual Mind. Minds and Machines 2: 217-238.
Nagel, T. (1974) What is it like to be a bat? Philosophical Review 83: 435-451.
Nagel, T. (1986) The view from nowhere. New York: Oxford University Press.
Newell, A. (1980) Physical Symbol Systems. Cognitive Science 4: 135-183.
Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge MA: MIT/Bradford.
Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424.
Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and machines, A. Anderson (ed.). Englewood Cliffs NJ: Prentice Hall.
Turing, A. M. (1990) Mechanical intelligence (D. C. Ince, ed.). North-Holland.
relation_type: []
relation_uri: []
reportno: ~
rev_number: 8
series: ~
source: ~
status_changed: 2007-09-12 16:38:53
subjects:
- cog-psy
- comp-sci-art-intel
- comp-sci-neural-nets
- comp-sci-robot
- phil-mind
succeeds: ~
suggestions: ~
sword_depositor: ~
sword_slug: ~
thesistype: ~
title: "Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't"
type: journalp
userid: 63
volume: 4