TY - GEN
ID - cogprints497
UR - http://cogprints.org/497/
A1 - Smagt, P. van der
Y1 - 1994///
N2 - Minimisation methods for training feed-forward networks with back-propagation are compared. Feed-forward neural network training is a special case of function minimisation, where no explicit model of the data is assumed. Therefore, and due to the high dimensionality of the data, linearisation of the training problem through use of orthogonal basis functions is not desirable. The focus is on function minimisation on any basis. Quasi-Newton and conjugate gradient methods are reviewed, and the latter are shown to be a special case of error back-propagation with a momentum term. Three feed-forward learning problems are tested with five methods. It is shown that, due to the fixed stepsize, standard error back-propagation performs well in avoiding local minima. However, by using not only the local gradient but also the second derivative of the error function, a much shorter training time can be obtained. Conjugate gradient with Powell restarts is shown to be the superior method.
KW - feed-forward neural network training
KW - numerical optimisation techniques
KW - neural function approximation
KW - error back-propagation
KW - conjugate gradient
KW - quasi-Newton
TI - Minimisation methods for training feed-forward networks
SP - 1
AV - public
EP - 11
ER -