%A Stevan Harnad
%A S.J. Hanson
%A J. Lubin
%J Working Papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology
%T Categorical Perception and the Evolution of Supervised Learning in Neural Nets
%X Some of the features of animal and human categorical perception
(CP) for color, pitch and speech are exhibited by neural net simulations of CP with
one-dimensional inputs: When a backprop net is trained to discriminate and then
categorize a set of stimuli, the second task is accomplished by "warping" the
similarity space (compressing within-category distances and expanding
between-category distances). This natural side effect also occurs in humans and
animals. Such CP categories, consisting of named, bounded regions of similarity
space, may be the ground level out of which higher-order categories are
constructed; nets are one possible candidate for the mechanism that learns the
sensorimotor invariants that connect arbitrary names (elementary symbols?) to the
nonarbitrary shapes of objects. This paper examines how and why such
compression/expansion effects occur in neural nets.
%K categorical perception, neural nets, learning
%P 65-74
%E D. W. Powers
%E L. Reeker
%D 1991
%I Symposium on Symbol Grounding: Problems and Practice, Stanford University
%L cogprints1579