---
abstract: |
  This paper investigates potential learning rules 
  in the cerebellum. We review evidence that input to the cerebellum is 
  sparsely expanded by granule cells into a very high-dimensional basis
  vector, and that Purkinje cells learn to compute a linear separation
  using that basis.
  We review learning rules employed by existing cerebellar models, and show
  that recent results from Computational Learning Theory suggest that
  the standard delta rule would not be efficient.
  We suggest that alternative, attribute-efficient learning rules, such as 
  Winnow or Incremental Delta-Bar-Delta, are more appropriate for cerebellar
  modeling, and support this position with results from a computational model.
altloc: []
chapter: ~
commentary: ~
commref: ~
confdates: July 2001
conference: International Joint Conference on Neural Networks 2001
confloc: 'Washington, DC, USA'
contact_email: ~
creators_id: []
creators_name:
  - family: Harris
    given: Harlan
    honourific: ''
    lineage: ''
  - family: Reichler
    given: Jesse
    honourific: ''
    lineage: ''
date: 2001
date_type: published
datestamp: 2002-07-03
department: ~
dir: disk0/00/00/23/10
edit_lock_since: ~
edit_lock_until: ~
edit_lock_user: ~
editors_id: []
editors_name:
  - family: Marko
    given: Kenneth
    honourific: ''
    lineage: ''
  - family: Werbos
    given: Paul
    honourific: ''
    lineage: ''
eprint_status: archive
eprintid: 2310
fileinfo: /style/images/fileicons/application_postscript.png;/2310/2/sparsewinnow.ps
full_text_status: public
importid: ~
institution: ~
isbn: ~
ispublished: pub
issn: ~
item_issues_comment: []
item_issues_count: 0
item_issues_description: []
item_issues_id: []
item_issues_reported_by: []
item_issues_resolved_by: []
item_issues_status: []
item_issues_timestamp: []
item_issues_type: []
keywords: 'cerebellum, modeling, learning theory, winnow, idbd'
lastmod: 2011-03-11 08:54:57
latitude: ~
longitude: ~
metadata_visibility: show
note: ~
number: ~
pagerange: ~
pubdom: FALSE
publication: ~
publisher: IEEE
refereed: TRUE
referencetext: |+
  David Marr.
  A theory of cerebellar cortex.
  Journal of Physiology, 202:437--470, 1969.
  
  J. S. Albus.
  The theory of cerebellar function.
  Mathematical Biosciences, 10:25--61, 1971.
  
  James C. Houk, Jay T. Buckingham, and Andrew G. Barto.
  Models of the cerebellum and motor learning.
  Behavioral and Brain Sciences, 19:368--383, 1996.
  
  J. L. Krichmar, G. A. Ascoli, L. Hunter, and J. L. Olds.
  A model of cerebellar saccadic motor learning using qualitative
    reasoning.
  Lecture Notes in Computer Science, Artificial and Natural Neural
    Networks, 1240:134--145, 1997.
  
  James C. Houk and Andrew G. Barto.
  Distributed sensorimotor learning.
  In G. E. Stelmach and J. Requin, editors, Tutorials in Motor
    Behavior II, pages 71--100. Elsevier Science Publishers B. V., Amsterdam,
    The Netherlands, 1992.
  
  Young H. Kim and Frank L. Lewis.
  Optimal design of CMAC neural-network controller for robot
    manipulators.
  IEEE Transactions on Systems, Man, and Cybernetics Part C:
    Applications and Reviews, 30(1), 2000.
  
  Toby Tyrrell and David Willshaw.
  Cerebellar cortex: its simulation and the relevance of Marr's theory.
  Philosophical Transactions of the Royal Society of London, B,
    336:239--257, 1992.
  
  Nick Littlestone.
  Learning quickly when irrelevant attributes abound: A new
    linear-threshold algorithm.
  Machine Learning, 2:285--318, 1988.
  
  J. Kivinen, M. K. Warmuth, and P. Auer.
  The Perceptron algorithm versus Winnow: Linear versus
    logarithmic mistake bounds when few input variables are relevant.
  Artificial Intelligence, 97:325--343, 1997.
  
  Richard S. Sutton.
  Adapting bias by gradient descent: An incremental version of
    Delta-Bar-Delta.
  In Proceedings of the Tenth National Conference on Artificial
    Intelligence, pages 171--176. MIT Press, 1992.
  
  Robert A. Jacobs.
  Increased rates of convergence through learning rate adaptation.
  Neural Networks, 1:295--307, 1988.
  
  Nicolas Schweighofer and Michael A. Arbib.
  A model of cerebellar metaplasticity.
  Learning and Memory, 4:421--428, 1998.
  
  Valentino Braitenberg, Detlev Heck, and Fahad Sultan.
  The detection and generation of sequences as a key to cerebellar
    function: Experiments and theory.
  Behavioral and Brain Sciences, 20:229--277, 1997.
  
  T. M. Cover.
  Geometrical and statistical properties of systems of linear
    inequalities with applications in pattern recognition.
  IEEE Transactions on Electronic Computers, EC-14:326--334, 1965.
  
  J. S. Albus.
  A new approach to manipulator control: The cerebellar model
    articulation controller (CMAC).
  Trans. ASME J. Dynamic Systems, Measurement, and Control,
    97:220--227, 1975.
  
  Jyrki Kivinen and Manfred K. Warmuth.
  Exponentiated gradient versus gradient descent for linear predictors.
  Information and Computation, 132(1):1--63, 1997.
  
  Andrew R. Golding and Dan Roth.
  A Winnow-based approach to context-sensitive spelling correction.
  Machine Learning, 34:107--130, 1999.
  
  Claudio Gentile and Nick Littlestone.
  The robustness of the p-norm algorithms.
  In Proceedings of the Twelfth Annual Conference on Computational
    Learning Theory (COLT), 1999.
  
relation_type: []
relation_uri: []
reportno: ~
rev_number: 10
series: ~
source: ~
status_changed: 2007-09-12 16:44:07
subjects:
  - comp-neuro-sci
  - neuro-mod
succeeds: ~
suggestions: ~
sword_depositor: ~
sword_slug: ~
thesistype: ~
title: Learning in the Cerebellum with Sparse Conjunctions and Linear Separator Algorithms
type: confpaper
userid: 3218
volume: ~