Deterministic convergence of conjugate gradient method for feedforward neural networks

Cited by: 33
Authors
Wang, Jian [1 ,2 ,3 ]
Wu, Wei [2 ]
Zurada, Jacek M. [1 ]
Affiliations
[1] Univ Louisville, Dept Elect & Comp Engn, Louisville, KY 40292 USA
[2] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
[3] China Univ Petr, Sch Math & Computat Sci, Dongying 257061, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deterministic convergence; Conjugate gradient; Backpropagation; Feedforward neural networks; EXTREME LEARNING-MACHINE; ONLINE; ALGORITHM;
DOI
10.1016/j.neucom.2011.03.016
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Conjugate gradient methods offer practical advantages in numerical experiments, such as fast convergence and low memory requirements. This paper considers a class of conjugate gradient learning methods for backpropagation neural networks with three layers. We propose a new learning algorithm for almost-cyclic learning of neural networks based on the Polak-Ribiere-Polyak (PRP) conjugate gradient method. We then establish deterministic convergence properties for three learning modes: batch, cyclic, and almost-cyclic learning. Two notions of deterministic convergence are considered: weak convergence, meaning that the gradient of the error function tends to zero, and strong convergence, meaning that the weight sequence converges to a fixed point. The convergence results are shown to depend on the learning mode and on the strategy used to select the learning rate. Illustrative numerical examples support the theoretical analysis. (C) 2011 Elsevier B.V. All rights reserved.
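The PRP update summarized in the abstract can be sketched in batch mode as follows. This is an illustrative sketch, not the paper's exact algorithm: the network sizes, toy data, fixed learning rate, and the nonnegativity restart on the PRP coefficient (sometimes called PRP+) are all assumptions made here for a runnable example.

```python
import numpy as np

# Sketch: batch-mode training of a three-layer feedforward network
# (one hidden layer) using the Polak-Ribiere-Polyak (PRP) conjugate
# gradient direction. All sizes and hyperparameters are illustrative.

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))                 # 20 samples, 2 inputs
y = (X[:, :1] * X[:, 1:2] > 0).astype(float)     # toy XOR-like targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(w):
    # weights flattened into one vector: W1 (2x4), b1 (4), W2 (4x1), b2 (1)
    return w[:8].reshape(2, 4), w[8:12], w[12:16].reshape(4, 1), w[16:17]

def loss_and_grad(w):
    W1, b1, W2, b2 = unpack(w)
    H = sigmoid(X @ W1 + b1)                     # hidden activations
    out = sigmoid(H @ W2 + b2)                   # network output
    err = out - y
    loss = 0.5 * np.mean(err ** 2)
    # backpropagation of the mean squared error
    d_out = err * out * (1 - out) / len(X)
    gW2, gb2 = H.T @ d_out, d_out.sum(0)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    gW1, gb1 = X.T @ d_hid, d_hid.sum(0)
    return loss, np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])

w = rng.standard_normal(17) * 0.5
loss, g = loss_and_grad(w)
loss0 = loss
d = -g                                           # first direction: steepest descent
eta = 0.5                                        # fixed learning rate (assumption)
for k in range(200):
    w = w + eta * d
    loss, g_new = loss_and_grad(w)
    # PRP coefficient, clipped at zero to restart with steepest descent
    beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))
    d = -g_new + beta * d
    g = g_new
```

The fixed learning rate stands in for the paper's learning-rate selection strategies; the convergence theorems in the paper tie the admissible rates to the learning mode (batch, cyclic, or almost-cyclic).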
Pages: 2368 - 2376 (9 pages)
Related papers
50 records in total
  • [1] Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks
    Zhang, Huisheng
    Zhang, Ying
    Xu, Dongpo
    Liu, Xiaodong
    COGNITIVE NEURODYNAMICS, 2015, 9 (03) : 331 - 340
  • [2] Deterministic convergence of an online gradient method for neural networks
    Wu, W
    Xu, YS
    JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, 2002, 144 (1-2) : 335 - 347
  • [3] Deterministic convergence of an online gradient method for BP neural networks
    Wu, W
    Feng, GR
    Li, ZX
    Xu, YS
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2005, 16 (03) : 533 - 540
  • [4] Boundedness and Convergence of Online Gradient Method With Penalty for Feedforward Neural Networks
    Zhang, Huisheng
    Wu, Wei
    Liu, Fei
    Yao, Mingchen
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2009, 20 (06) : 1050 - 1054
  • [5] Convergence of an online gradient method for feedforward neural networks with stochastic inputs
    Li, ZX
    Wu, W
    Tian, YL
    JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, 2004, 163 (01) : 165 - 176
  • [6] Convergence of gradient method with momentum for two-layer feedforward neural networks
    Zhang, NM
    Wu, W
    Zheng, GF
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2006, 17 (02) : 522 - 525
  • [7] Convergence of Batch Gradient Method Based on the Entropy Error Function for Feedforward Neural Networks
    Xiong, Yan
    Tong, Xin
    NEURAL PROCESSING LETTERS, 2020, 52 : 2687 - 2695
  • [8] Boundedness and Convergence of Online Gradient Method with Penalty for Linear Output Feedforward Neural Networks
    Zhang, Huisheng
    Wu, Wei
    NEURAL PROCESSING LETTERS, 2009, 29 (03) : 205 - 212