A weight initialization method for improving training speed in feedforward neural network

Cited by: 124
Authors
Yam, JYF [1 ]
Chow, TWS [1 ]
Affiliations
[1] City Univ Hong Kong, Dept Elect Engn, Tat Chee Ave, Kowloon, Peoples R China
Keywords
initial weights determination; feedforward neural networks; backpropagation; linear least squares; Cauchy inequality;
DOI
10.1016/S0925-2312(99)00127-7
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
An algorithm for determining the optimal initial weights of feedforward neural networks, based on Cauchy's inequality and a linear algebraic method, is developed. The algorithm is computationally efficient. The proposed method ensures that the outputs of neurons are in the active region and increases the rate of convergence. With the optimal initial weights determined, the initial error is substantially smaller and the number of iterations required to achieve the error criterion is significantly reduced. Extensive tests were performed to compare the proposed algorithm with other algorithms. In the case of the sunspots prediction, the number of iterations required for the network initialized with the proposed method was only 3.03% of that required by the next best weight initialization algorithm. (C) 2000 Elsevier Science B.V. All rights reserved.
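The core idea of the abstract (bound the net input of each neuron via Cauchy's inequality so sigmoid outputs stay in the active region) can be sketched as follows. This is a minimal illustration, not the paper's exact derivation: the active-region bound `s_max = 4.59`, the input bound `x_max`, and the function name are assumptions for the sketch, and the paper additionally uses a linear least-squares step for the output layer that is omitted here.

```python
import numpy as np

def init_weights_active_region(n_in, n_out, x_max=1.0, s_max=4.59, rng=None):
    """Hedged sketch of active-region weight initialization.

    Draw weights uniformly in [-theta, theta], choosing theta so that the
    Cauchy-Schwarz bound |w . x| <= ||w|| * ||x|| keeps each neuron's net
    input roughly inside the sigmoid's active region |net| <= s_max.
    Assumes inputs are bounded by |x_i| <= x_max (an assumption, not a
    requirement stated in the abstract).
    """
    rng = np.random.default_rng(rng)
    # For w_i ~ U[-theta, theta], E[w_i^2] = theta^2 / 3, so the expected
    # norm is ||w|| ~= theta * sqrt(n_in / 3), while ||x|| <= x_max * sqrt(n_in).
    # Requiring theta * sqrt(n_in / 3) * x_max * sqrt(n_in) <= s_max gives:
    theta = s_max * np.sqrt(3.0) / (x_max * n_in)
    return rng.uniform(-theta, theta, size=(n_in, n_out))
```

Because the bound uses the expected weight norm, it holds on average rather than worst-case; the scaling still shrinks with the fan-in `n_in`, which is what keeps net inputs out of the sigmoid's saturated tails.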
Pages: 219-232
Page count: 14
Related Papers
50 records in total
  • [31] FNN (feedforward neural network) training method based on robust recursive least square method
    Lim, JunSeok
    Sung, KoengMo
    ADVANCES IN NEURAL NETWORKS - ISNN 2007, PT 2, PROCEEDINGS, 2007, 4492 : 398 - +
  • [32] Evolutionary Based Weight Decaying Method for Neural Network Training
    Tsoulos, Ioannis G.
    Tzallas, Alexandros
    Tsalikakis, Dimitris
    NEURAL PROCESSING LETTERS, 2018, 47 (02) : 463 - 473
  • [34] Regularization and feedforward artificial neural network training with noise
    Chandra, P
    Singh, Y
    PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4, 2003, : 2366 - +
  • [35] RECURRENT NEURAL-NETWORK TRAINING WITH FEEDFORWARD COMPLEXITY
    OLUROTIMI, O
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1994, 5 (02): : 185 - 197
  • [36] Training Optimization of Feedforward Neural Network for Binary Classification
    Thawakar, Omkar
    Gajjewar, Pranav
    2019 INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATION AND INFORMATICS (ICCCI - 2019), 2019,
  • [37] Training algorithm of one feedforward wavelet neural network
    Zhao, G
    Zhao, J
    Chen, W
    Li, JP
    WAVELET ANALYSIS AND ITS APPLICATIONS, AND ACTIVE MEDIA TECHNOLOGY, VOLS 1 AND 2, 2004, : 135 - 142
  • [38] Training the Feedforward Neural Network Using Unconscious Search
    Amin-Naseri, M. R.
    Ardjmand, E.
    Weckman, G.
    2013 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2013,
  • [39] Quantitative Measures to Evaluate Neural Network Weight Initialization Strategies
    Ramos, Ernesto Zamora
    Nakakuni, Masanori
    Yfantis, Evangelos
    2017 IEEE 7TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE IEEE CCWC-2017, 2017,
  • [40] Controlled Dropout: a Different Dropout for Improving Training Speed on Deep Neural Network
    Ko, ByungSoo
    Kim, Han-Gyu
    Choi, Ho-Jin
    2017 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2017, : 972 - 977