Convergence of gradient method for Elman networks
DOI: 10.1007/s10483-008-0912-z
Publication Date: 2008-09-12
ABSTRACT
The gradient method for training Elman networks with a finite training sample set is considered. Monotonicity of the error function during the iteration is established. Weak and strong convergence results are proved, showing respectively that the gradient of the error function tends to zero and that the weight sequence converges to a fixed point. A numerical example is given to support the theoretical findings.
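The abstract describes gradient descent on the error of an Elman network over a finite sample set, with the error decreasing monotonically and its gradient tending to zero. The sketch below is a minimal illustration of that setup under stated assumptions, not the authors' algorithm: the toy sine-prediction task, the network sizes, the learning rate, and all variable names (W_in, W_rec, W_out, eta, etc.) are hypothetical, and the gradient is the classic simplified Elman gradient in which the context (previous hidden state) is treated as constant data rather than full backpropagation through time.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's exact algorithm): an Elman
# network trained by plain gradient descent on a finite sample set, printing
# the error E(W) and the gradient norm so the monotone decrease of E and the
# shrinking gradient can be observed, in the spirit of the weak convergence
# statement in the abstract.

rng = np.random.default_rng(0)

# Hypothetical toy task: predict the next value of a sine sequence.
T = 50
xs = np.sin(0.3 * np.arange(T))[:, None]          # inputs  x_t, shape (T, 1)
ds = np.sin(0.3 * (np.arange(T) + 1))[:, None]    # targets d_t, shape (T, 1)

n_in, n_hid, n_out = 1, 8, 1
W_in  = 0.1 * rng.standard_normal((n_hid, n_in))  # input   -> hidden
W_rec = 0.1 * rng.standard_normal((n_hid, n_hid)) # context -> hidden
W_out = 0.1 * rng.standard_normal((n_out, n_hid)) # hidden  -> output

def forward(W_in, W_rec, W_out):
    """Run the Elman network over the whole sample set.

    Returns the error E(W), the hidden states H, the contexts C
    (previous hidden states) and the outputs Y."""
    h_prev = np.zeros(n_hid)
    H, C, Y = [], [], []
    for t in range(T):
        c = h_prev                                # context = previous hidden state
        h = np.tanh(W_in @ xs[t] + W_rec @ c)
        y = W_out @ h
        H.append(h); C.append(c); Y.append(y)
        h_prev = h
    Y = np.array(Y)
    E = 0.5 * np.sum((Y - ds) ** 2)               # error function E(W)
    return E, np.array(H), np.array(C), Y

eta = 0.05                                        # assumed (small) learning rate
for k in range(200):
    E, H, C, Y = forward(W_in, W_rec, W_out)
    # Simplified Elman gradient: the contexts C are treated as constant data,
    # so only the tanh hidden layer and the output layer are differentiated.
    dY = Y - ds                                   # (T, n_out)
    g_out = dY.T @ H                              # dE/dW_out
    delta_h = (dY @ W_out) * (1.0 - H ** 2)       # backprop through tanh only
    g_in  = delta_h.T @ xs                        # dE/dW_in
    g_rec = delta_h.T @ C                         # dE/dW_rec
    grad_norm = np.sqrt((g_out**2).sum() + (g_in**2).sum() + (g_rec**2).sum())
    if k % 50 == 0:
        print(f"iter {k:3d}  E = {E:.5f}  ||grad E|| = {grad_norm:.5f}")
    # Gradient step: for a sufficiently small eta the error decreases
    # monotonically, mirroring the monotonicity result stated in the abstract.
    W_out -= eta * g_out
    W_in  -= eta * g_in
    W_rec -= eta * g_rec
```

Running the script prints a monotonically decreasing error together with a shrinking gradient norm on this toy problem; the fixed learning rate and the truncated gradient are illustrative choices, and any conditions the paper places on the step size or the weights are not reproduced here.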