title: Learning in the Cerebellum with Sparse Conjunctions and Linear Separator Algorithms
creator: Harris, Harlan
creator: Reichler, Jesse
subject: Computational Neuroscience
subject: Neural Modelling
description: This paper investigates potential learning rules in the cerebellum. We review evidence that input to the cerebellum is sparsely expanded by granule cells into a very wide basis vector, and that Purkinje cells learn to compute a linear separation using that basis. We review learning rules employed by existing cerebellar models, and show that recent results from Computational Learning Theory suggest that the standard delta rule would not be efficient. We suggest that alternative, attribute-efficient learning rules, such as Winnow or Incremental Delta-Bar-Delta, are more appropriate for cerebellar modeling, and support this position with results from a computational model.
publisher: IEEE
contributor: Marko, Kenneth
contributor: Werbos, Paul
date: 2001
type: Conference Paper
type: PeerReviewed
format: application/postscript
identifier: http://cogprints.org/2310/2/sparsewinnow.ps
identifier: Harris, Harlan and Reichler, Jesse (2001) Learning in the Cerebellum with Sparse Conjunctions and Linear Separator Algorithms. [Conference Paper]
relation: http://cogprints.org/2310/
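
The abstract turns on the contrast between the standard (additive) delta rule and attribute-efficient (multiplicative) learners such as Winnow. For orientation, below is a minimal Python sketch of the two textbook update rules; it illustrates the general technique (Littlestone-style Winnow versus the classic delta rule), not the authors' cerebellar model, and all sizes, names, and constants are illustrative assumptions.

    import numpy as np

    def delta_rule_update(w, x, y, eta=0.1):
        # Additive update: w <- w + eta * (y - y_hat) * x. Every weight
        # moves by an amount proportional to its input, so mistake bounds
        # scale with the total number of features, relevant or not.
        y_hat = float(np.dot(w, x) >= 0.0)
        return w + eta * (y - y_hat) * x

    def winnow_update(w, x, y, theta, alpha=2.0):
        # Multiplicative update on binary features: on a mistake, the
        # weights of active features are promoted or demoted by a factor
        # alpha. Mistake bounds grow only logarithmically in the number
        # of irrelevant features ("attribute efficiency").
        y_hat = float(np.dot(w, x) >= theta)
        if y_hat < y:        # false negative: promote active weights
            w = np.where(x > 0, w * alpha, w)
        elif y_hat > y:      # false positive: demote active weights
            w = np.where(x > 0, w / alpha, w)
        return w

    # One update on a wide, sparse binary input, loosely analogous to the
    # granule-cell expansion described in the abstract (n is illustrative).
    rng = np.random.default_rng(0)
    n = 1000
    x = (rng.random(n) < 0.05).astype(float)
    w = winnow_update(np.ones(n), x, y=1.0, theta=float(n))

Winnow's multiplicative promotion and demotion is what makes it attribute-efficient: when the granule-cell layer provides a very wide basis in which most dimensions are irrelevant to any one output, the number of mistakes depends on the few relevant dimensions rather than on the full width of the expansion, which is the efficiency argument the abstract makes against the delta rule.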