Title: Sufficient Conditions for Error Back Flow Convergence in Dynamical Recurrent Neural Networks
Creator: Aussem, Alex
Subject: Neural Nets
Description: This paper extends previous analyses of gradient decay to a class of discrete-time fully recurrent networks, called Dynamical Recurrent Neural Networks (DRNN), obtained by modelling synapses as Finite Impulse Response (FIR) filters instead of multiplicative scalars. Using elementary matrix manipulations, we provide an upper bound on the norm of the weight matrix ensuring that the gradient vector, when propagated backwards in time through the error-propagation network, decays exponentially to zero. This bound applies to all FIR architecture proposals as well as to fixed-point recurrent networks, regardless of delay and connectivity. In addition, we show that the computational overhead of the learning algorithm can be reduced drastically by taking advantage of the exponential decay of the gradient.
Date: 2000
Type: Conference Paper (peer reviewed)
Format: application/postscript
Identifier: http://cogprints.org/1039/2/IJCNN2000_drnn.ps
Citation: Aussem, Alex (2000) Sufficient Conditions for Error Back Flow Convergence in Dynamical Recurrent Neural Networks. [Conference Paper]
Relation: http://cogprints.org/1039/
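Note: the abstract's central claim is a norm bound on the weight matrix that forces the backpropagated gradient to vanish exponentially. As a rough illustration of the kind of bound involved (a minimal sketch in standard notation, not the paper's exact DRNN/FIR result), consider an ordinary recurrent update x(t+1) = f(W x(t) + u(t)) with the activation slope bounded by |f'| <= gamma. Submultiplicativity of the matrix norm applied to the product of Jacobians gives

\[
\left\| \frac{\partial E}{\partial x(t-k)} \right\|
\;\le\;
\bigl( \gamma \, \lVert W \rVert \bigr)^{k}
\left\| \frac{\partial E}{\partial x(t)} \right\| ,
\]

so whenever \(\lVert W \rVert < 1/\gamma\) the error back flow decays exponentially with the time lag k. The paper establishes the analogous condition for the case where each synapse is an FIR filter rather than a single multiplicative scalar.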