A fast neural network learning with guaranteed convergence to zero system error

Cited: 0
Authors: Ajimura, T; Yamada, I; Sakaniwa, K
Keywords: neural network; local minimum; dimension expansion; zero system error
DOI: not available
Chinese Library Classification: TP3 (computing technology, computer technology)
Subject classification code: 0812
Abstract
It is thought that we have generally succeeded in establishing learning algorithms for neural networks, such as the back-propagation algorithm. However, two major issues remain to be solved. First, learning may become trapped at a local minimum. Second, the convergence rate is too slow. Chang and Ghaffar proposed adding a new hidden node whenever training stalls at a local minimum and then retraining the enlarged network until the error converges to zero. Their method designs the newly generated weights so that the network obtained after introducing the new hidden node has a smaller error than that at the original local minimum. In this paper, we propose a new method that improves their convergence rate. The proposed method is expected to give a lower system error and a larger error gradient magnitude than their method at the starting point of the new network, which leads to a faster convergence rate. Numerical examples show that the proposed method performs much better than the conventional method of Chang and Ghaffar.
Pages: 1433-1439
Page count: 7
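
The abstract describes a constructive strategy: train with back-propagation, and when the error stalls at a nonzero value (a suspected local minimum), add a hidden node and continue training toward zero system error. The sketch below is only a minimal, hypothetical illustration of that generic grow-when-stuck loop; the paper's actual contribution, the design of the new node's weights so that the restarted network has a lower error and a larger gradient magnitude, is not reproduced here, and the new weights are simply drawn small and random instead.

```python
# Minimal sketch of constructive training (illustration only, not the
# authors' weight-design rule): gradient-descent training of a one-hidden-
# layer network; when progress stalls at nonzero error, grow the hidden
# layer by one unit and keep training.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem with poor local minima for small nets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init(n_in, n_hid, n_out):
    W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
    b1 = np.zeros(n_hid)
    W2 = rng.normal(scale=0.5, size=(n_hid, n_out))
    b2 = np.zeros(n_out)
    return W1, b1, W2, b2

def forward(params, X):
    W1, b1, W2, b2 = params
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    return H, Y

def sse(Y, T):
    # "System error": sum of squared output errors over the training set.
    return 0.5 * np.sum((Y - T) ** 2)

def backprop_step(params, X, T, lr=0.5):
    W1, b1, W2, b2 = params
    H, Y = forward(params, X)
    dY = (Y - T) * Y * (1 - Y)        # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)    # hidden-layer delta
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)
    return W1, b1, W2, b2

def add_hidden_unit(params):
    # Placeholder for the paper's designed weights: here the new unit just
    # gets small random weights.
    W1, b1, W2, b2 = params
    W1 = np.hstack([W1, rng.normal(scale=0.1, size=(W1.shape[0], 1))])
    b1 = np.append(b1, 0.0)
    W2 = np.vstack([W2, rng.normal(scale=0.1, size=(1, W2.shape[1]))])
    return W1, b1, W2, b2

params = init(2, 1, 1)      # start deliberately small: one hidden unit
prev_err = np.inf
for epoch in range(20000):
    params = backprop_step(params, X, T)
    err = sse(forward(params, X)[1], T)
    if err < 1e-4:           # tolerance standing in for "zero system error"
        break
    if epoch % 500 == 499:
        if prev_err - err < 1e-5:   # stalled: treat as a local minimum, grow
            params = add_hidden_unit(params)
        prev_err = err

print("final error:", sse(forward(params, X)[1], T),
      "hidden units:", params[0].shape[1])
```

Starting from a single hidden unit makes the stall-and-grow behaviour visible on XOR; the quantity the paper optimizes at each restart (the system error and the gradient magnitude immediately after the new node is inserted) is exactly what this random initialization leaves to chance.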