Aussem, Alex (2000) Sufficient Conditions for Error Back Flow Convergence in Dynamical Recurrent Neural Networks. [Conference Paper]
Full text available as: Postscript (92Kb)
Abstract
This paper extends previous analysis of gradient decay to a class of discrete-time fully recurrent networks, called Dynamical Recurrent Neural Networks (DRNN), obtained by modelling synapses as Finite Impulse Response (FIR) filters instead of multiplicative scalars. Using elementary matrix manipulations, we provide an upper bound on the norm of the weight matrix ensuring that the gradient vector, when propagated backward in time through the error-propagation network, decays exponentially to zero. This bound applies to all FIR architecture proposals as well as fixed-point recurrent networks, regardless of delay and connectivity. In addition, we show that the computational overhead of the learning algorithm can be reduced drastically by taking advantage of the exponential decay of the gradient.
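To illustrate the kind of decay the abstract describes, here is a minimal numerical sketch. It is not the paper's DRNN/FIR formulation: it assumes a plain fully recurrent tanh network and a hypothetical spectral norm of 0.5 for the weight matrix, then propagates an error signal backward in time and compares its norm against the corresponding geometric bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20          # number of units (illustrative choice)
T = 30          # number of time steps (illustrative choice)

# Random recurrent weight matrix, rescaled so its spectral norm is 0.5 < 1.
W = rng.standard_normal((n, n))
W *= 0.5 / np.linalg.norm(W, 2)

# Forward pass of a plain fully recurrent net: x_t = tanh(W x_{t-1}).
x = np.zeros((T + 1, n))
x[0] = rng.standard_normal(n)
for t in range(T):
    x[t + 1] = np.tanh(W @ x[t])

# Backward pass: propagate an error signal from time T back through time.
delta = rng.standard_normal(n)
delta0 = np.linalg.norm(delta)
for k, t in enumerate(range(T, 0, -1), start=1):
    # Jacobian of x_t w.r.t. x_{t-1} is diag(1 - x_t**2) @ W  (tanh' = 1 - tanh^2).
    J = np.diag(1.0 - x[t] ** 2) @ W
    delta = J.T @ delta
    # Since |tanh'| <= 1 and ||W|| = 0.5, we have ||delta|| <= 0.5**k * ||delta_T||.
    print(f"{k:2d} steps back: ||delta|| = {np.linalg.norm(delta):.3e}"
          f"  (bound {0.5 ** k * delta0:.3e})")
```

In this sketch the backpropagated norm stays under the geometric bound at every step, which is the kind of behaviour that motivates the abstract's final point: once the gradient has decayed below a threshold, the remaining backward steps can be truncated, reducing the computational overhead of learning.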
| Item Type: | Conference Paper |
|---|---|
| Keywords: | Recurrent neural networks, gradient decay, forgetting behavior |
| Subjects: | Computer Science > Neural Nets |
| ID Code: | 1039 |
| Deposited By: | Aussem, Alex |
| Deposited On: | 18 Oct 2000 |
| Last Modified: | 11 Mar 2011 08:54 |