L'Ancrage des Symboles dans le Monde Analogique à l'aide de Réseaux Neuronaux: un Modèle Hybride [Grounding Symbols in the Analog World with Neural Networks: A Hybrid Model]

Harnad, Stevan (1993) L'Ancrage des Symboles dans le Monde Analogique à l'aide de Réseaux Neuronaux: un Modèle Hybride. [Journal (Paginated)]


Abstract

The grounding model proposed here is easy to summarize. Analog sensory projections are the inputs to neural networks that must learn to connect some of those projections with certain symbols (the names of their categories) and other projections with other symbols (the names of other, potentially confusable categories), by finding and using the invariants that represent them so as to support correct categorization. The grounded symbols are then strung together into higher-order combinations (grounded symbolic descriptions) by a second, combinatory process that differs in one critical respect from classical symbol manipulation. In standard (ungrounded) symbol manipulation, syntax is the only constraint to which combinations of symbols are subject, and it applies to the (arbitrary) shape of the symbols. In a grounded symbol system, a second constraint must be taken into account: the nonarbitrary shape of the sensory invariants that connect each symbol to the analog sensory projection of the object to which it refers. I cannot elaborate here on the nature of these doubly constrained grounded symbol systems, except to note that human categorical perception may offer some clues about the nature of this interaction between analog and syntactic constraints.
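To make the two-stage architecture described above concrete, here is a minimal sketch, not from the paper: all names (Categorizer, grounded_lexicon, compose), the perceptron learning rule, and the toy "A AND B" syntax are illustrative assumptions. A simple network learns an invariant that connects analog sensory projections to category-name symbols, and a second, combinatory step builds higher-order descriptions that must satisfy both a syntactic constraint and the grounding constraint that every symbol be connected to a learned sensory invariant.

```python
# Hypothetical sketch of the hybrid grounding architecture described in the
# abstract; not the author's implementation.

import numpy as np

rng = np.random.default_rng(0)

class Categorizer:
    """One-layer network that learns an invariant separating two categories."""
    def __init__(self, n_inputs):
        self.w = np.zeros(n_inputs)
        self.b = 0.0

    def predict(self, x):
        return 1 if x @ self.w + self.b > 0 else 0

    def train(self, projections, labels, epochs=50, lr=0.1):
        # Perceptron rule (an assumption): adjust weights until the analog
        # projections of the two categories are separated by the invariant.
        for _ in range(epochs):
            for x, y in zip(projections, labels):
                err = y - self.predict(x)
                self.w += lr * err * x
                self.b += lr * err

# --- First stage: ground elementary symbols in analog sensory projections ---
# Two toy categories of "sensory projections": noisy points around two means.
proj_a = rng.normal(loc=-1.0, scale=0.3, size=(20, 4))
proj_b = rng.normal(loc=+1.0, scale=0.3, size=(20, 4))
X = np.vstack([proj_a, proj_b])
y = np.array([0] * 20 + [1] * 20)

net = Categorizer(n_inputs=4)
net.train(X, y)

# Grounded lexicon: each symbol is connected to the detector (the learned
# invariant) that picks out its category from analog input.
grounded_lexicon = {
    "category_A": lambda x: net.predict(x) == 0,
    "category_B": lambda x: net.predict(x) == 1,
}

# --- Second stage: compose grounded symbols into higher-order descriptions ---
def compose(symbols, lexicon):
    """Build a higher-order description subject to two constraints:
    (1) syntax: here, simply a binary "A AND B" combination;
    (2) grounding: every symbol must already be connected to a sensory invariant."""
    if len(symbols) != 2:                       # syntactic constraint (toy)
        raise ValueError("toy syntax allows exactly two symbols")
    for s in symbols:
        if s not in lexicon:                    # grounding constraint
            raise ValueError(f"symbol {s!r} is not grounded")
    return f"({symbols[0]} AND {symbols[1]})"

if __name__ == "__main__":
    print("category of a new projection:", net.predict(rng.normal(1.0, 0.3, 4)))
    print("grounded description:", compose(["category_A", "category_B"], grounded_lexicon))
```

In this sketch the grounding constraint is enforced only by checking membership in the grounded lexicon; in the hybrid model itself, the nonarbitrary shape of the learned sensory invariants would additionally constrain which symbol combinations are admissible.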

Item Type: Journal (Paginated)
Keywords: neural networks, symbol grounding, connectionism, symbolism, Searle's Chinese Room, Turing Test, robotics
Subjects: Computer Science > Dynamical Systems
Neuroscience > Neural Modelling
Philosophy > Philosophy of Mind
ID Code: 2541
Deposited By: Harnad, Stevan
Deposited On: 18 Oct 2002
Last Modified: 11 Mar 2011 08:55

References in Article

Andrews, J., Livingston, K., Harnad, S. & Fischer, U. (in prep.) Learned Categorical Perception in Human Subjects: Implications for Symbol Grounding.

Chomsky, N. (1980) Rules and representations. Behavioral and Brain Sciences 3: 1-61.

Dietrich, E. (1990) Computationalism. Social Epistemology 4: 135-154.

Dyer, M. G. (1990) Intentionality and Computationalism: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2(4).

Fodor, J. & Pylyshyn, Z. (1988) Connectionism and cognitive architecture: A critical analysis. Cognition 28: 3-71.

Fodor, J. A. (1975) The Language of Thought. New York: Thomas Y. Crowell.

Hanson & Burr (1990) What connectionist models learn: Learning and representation in connectionist networks. Behavioral and Brain Sciences 13: 471-518.

Harnad, S. (1984) Verifying machines' minds. Contemporary Psychology 29: 389-391.

Harnad, S. (1987) (ed.) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.

Harnad, S. (1989) Minds, Machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1: 5-25.

Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346.

Harnad, S. (1990b) Against Computational Hermeneutics. (Invited commentary on Eric Dietrich's Computationalism.) Social Epistemology 4: 167-172.

Harnad, S. (1990c) Lost in the hermeneutic hall of mirrors. (Invited commentary on: Michael Dyer: Minds, Machines, Searle and Harnad.) Journal of Experimental and Theoretical Artificial Intelligence 2: 321-327.

Harnad, S. (1990d) Symbols and Nets: Cooperation vs. Competition. Review of: S. Pinker and J. Mehler (Eds.) (1988) Connections and Symbols. Connection Science 2: 257-260.

Harnad, S. (1991) Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54.

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds.) Connectionism in Context. Springer Verlag.

Harnad, S., Hanson, S. J. & Lubin, J. (1991) Categorical Perception and the Evolution of Supervised Learning in Neural Nets. In: Working Papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (D. W. Powers & L. Reeker, Eds.) pp. 65-74. Presented at Symposium on Symbol Grounding: Problems and Practice, Stanford University, March 1991; also reprinted as Document D91-09, Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH, Kaiserslautern, FRG.

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on the Virtual Mind. Minds and Machines (in press).

Harnad, S., Hanson, S. J. & Lubin, J. (in prep.) Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding.

Lawrence, D. H. (1950) Acquired distinctiveness of cues: II. Selective association in a constant stimulus situation. Journal of Experimental Psychology 40: 175-188.

Lubin, J., Hanson, S. & Harnad, S. (in prep.) Categorical Perception in ARTMAP Neural Networks.

McClelland, J. L., Rumelhart, D. E., and the PDP Research Group (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1. Cambridge MA: MIT/Bradford.

MacLennan, B. J. (1987) Technology independent design of neurocomputers: The universal field computer. In M. Caudill & C. Butler (Eds.), Proceedings, IEEE First International Conference on Neural Networks (Vol. 3, pp. 39-49). New York, NY: Institute of Electrical and Electronics Engineers.

MacLennan, B. J. (1988) Logic for the new AI. In J. H. Fetzer (Ed.), Aspects of Artificial Intelligence (pp. 163-192). Dordrecht: Kluwer.

MacLennan, B. J. (in press-a) Continuous symbol systems: The logic of connectionism. In Daniel S. Levine and Manuel Aparicio IV (Eds.), Neural Networks for Knowledge Representation and Inference. Hillsdale, NJ: Lawrence Erlbaum.

MacLennan, B. J. (in press-b) Characteristics of connectionist knowledge representation. Information Sciences, to appear.

Minsky, M. & Papert, S. (1969) Perceptrons: An Introduction to Computational Geometry. Cambridge MA: MIT Press.

Newell, A. (1980) Physical Symbol Systems. Cognitive Science 4: 135-183.

Pinker, S. & Prince, A. (1988) On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition 28(1-2): 73-193.

Pylyshyn, Z. W. (1984) Computation and Cognition. Cambridge MA: Bradford Books.

Rosenblatt, F. (1962) Principles of Neurodynamics. NY: Spartan.

Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424.

Searle, J. (1990) Is the brain's mind a computer program? Scientific American 262: 26-31.

Touretzky, D. S. (ed.) (1991) Machine Learning, vol. 7, nos. 2 and 3, special double issue on "Connectionist Approaches to Language Learning."

Touretzky, D. S. (1990) BoltzCONS: Dynamic symbol structures in a connectionist network. Artificial Intelligence 46: 5-46.

Touretzky, D. S. and Hinton, G. E. (1988) A distributed connectionist production system. Cognitive Science 12(3): 423-466.

Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and Machines, A. Anderson (ed.), Englewood Cliffs NJ: Prentice Hall.
