Applied Mathematics and Mechanics (English Edition) ›› 2008, Vol. 29 ›› Issue (9): 1231-1238 .doi: https://doi.org/10.1007/s10483-008-0912-z

• Articles •


Convergence of gradient method for Elman networks

WU Wei, XU Dong-po, LI Zheng-xue   

  1. Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, Liaoning Province, P. R. China
  • Received:2007-12-05 Revised:2008-07-28 Online:2008-09-10 Published:2008-09-10
  • Contact: WU Wei


Abstract: The gradient method for training Elman networks with a finite training sample set is considered. Monotonicity of the error function in the iteration is shown. Weak and strong convergence results are proved, indicating that the gradient of the error function converges to zero and the weight sequence converges to a fixed point, respectively. A numerical example is given to support the theoretical findings.
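
To make the training procedure concrete, the following is a minimal sketch, not taken from the paper: an Elman (simple recurrent) network trained by batch gradient descent on a finite sample set, with the context units treated as constant inputs when the gradient is formed (the classic Elman training scheme), which may differ in detail from the gradient analysed by the authors. All layer sizes, data, and the learning rate are illustrative assumptions.

```python
# Sketch of batch gradient training for an Elman network on a finite sample set.
# The context (previous hidden) units are held constant when the gradient is formed;
# sizes, data, and learning rate below are illustrative, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 2, 5, 1
W_in = 0.1 * rng.standard_normal((n_hidden, n_in))       # input   -> hidden
W_ctx = 0.1 * rng.standard_normal((n_hidden, n_hidden))  # context -> hidden
W_out = 0.1 * rng.standard_normal((n_out, n_hidden))     # hidden  -> output

# Toy finite training set: 8 input sequences of 4 steps, one target per sequence.
X = rng.standard_normal((8, 4, n_in))
T = rng.standard_normal((8, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for epoch in range(201):
    g_in = np.zeros_like(W_in)
    g_ctx = np.zeros_like(W_ctx)
    g_out = np.zeros_like(W_out)
    total_error = 0.0
    for x_seq, t in zip(X, T):
        h = np.zeros(n_hidden)            # context units start at zero
        for x in x_seq:                   # forward pass through the sequence
            h_prev = h                    # context = hidden state of previous step
            h = sigmoid(W_in @ x + W_ctx @ h_prev)
        y = W_out @ h                     # linear output at the final step
        e = y - t
        total_error += 0.5 * float(e @ e)
        # Backward pass for the final step only, context held constant.
        d_out = e                                  # dE/dy for the squared error
        g_out += np.outer(d_out, h)
        d_h = (W_out.T @ d_out) * h * (1.0 - h)    # back through the sigmoid
        g_in += np.outer(d_h, x)
        g_ctx += np.outer(d_h, h_prev)
    # One batch gradient step over the whole finite sample set.
    W_in -= lr * g_in
    W_ctx -= lr * g_ctx
    W_out -= lr * g_out
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  total squared error {total_error:.6f}")
```

For a sufficiently small learning rate the printed error decreases from epoch to epoch, which mirrors the monotonicity property the paper establishes under its stated conditions; the weak and strong convergence results then concern the gradient of the error function and the weight sequence themselves.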

Key words: Elman network, gradient learning algorithm, convergence, monotonicity

