%A Harlan Harris
%A Jesse Reichler
%T Learning in the Cerebellum with Sparse Conjunctions and Linear Separator Algorithms
%X This paper investigates potential learning rules 
in the cerebellum. We review evidence that input to the cerebellum is 
sparsely expanded by granule cells into a very wide basis vector, 
and that Purkinje
cells learn to compute a linear separation using that basis.
We review the learning rules employed by existing cerebellar models, and argue
that recent results from computational learning theory imply that
the standard delta rule would not be efficient.
We suggest that alternative, attribute-efficient learning rules, such as 
Winnow or Incremental Delta-Bar-Delta, are more appropriate for cerebellar
modeling, and support this position with results from a computational model.
%K cerebellum, modeling, learning theory, winnow, idbd
%E Kenneth Marko
%E Paul Werbos
%D 2001
%I IEEE
%L cogprints2310