Training the random neural network using quasi-Newton methods

Cited by: 44
Authors
Likas, A [1]
Stafylopatis, A [2]
Affiliations
[1] Univ Ioannina, Dept Comp Sci, GR-45110 Ioannina, Greece
[2] Natl Tech Univ Athens, Dept Elect & Comp Engn, GR-15773 Zografos, Greece
DOI
10.1016/S0377-2217(99)00482-8
CLC number
C93 [Management Science]
Subject classification codes
12 ; 1201 ; 1202 ; 120202 ;
Abstract
Training in the random neural network (RNN) is generally specified as the minimization of an appropriate error function with respect to the parameters of the network (the weights corresponding to positive and negative connections). We propose a technique for error minimization based on quasi-Newton optimization methods. Such methods exploit gradient information more effectively than simple gradient descent, but are computationally more expensive and more difficult to implement. In this work we specify the details necessary to apply quasi-Newton methods to training the RNN, and provide comparative experimental results from applying these methods to several well-known test problems, which confirm the superiority of the approach. (C) 2000 Elsevier Science B.V. All rights reserved.
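The core idea the abstract contrasts with gradient descent — maintaining an approximation of the inverse Hessian and refining it from successive gradient differences — can be sketched generically. The toy quadratic error function, the two-parameter setting, and the BFGS variant below are illustrative assumptions, not the authors' RNN-specific formulation:

```python
# A minimal BFGS sketch in pure Python (two parameters), illustrating the
# quasi-Newton idea: keep an approximation H of the inverse Hessian and
# refine it from successive gradient differences instead of recomputing
# second derivatives. The quadratic below stands in for the RNN error.

def f(w):
    # toy error function with minimum at w = (1, 2)
    return (w[0] - 1.0) ** 2 + 10.0 * (w[1] - 2.0) ** 2

def grad(w):
    return [2.0 * (w[0] - 1.0), 20.0 * (w[1] - 2.0)]

def bfgs(w, iters=50):
    H = [[1.0, 0.0], [0.0, 1.0]]          # initial inverse-Hessian estimate
    g = grad(w)
    for _ in range(iters):
        # search direction d = -H g
        d = [-(H[0][0] * g[0] + H[0][1] * g[1]),
             -(H[1][0] * g[0] + H[1][1] * g[1])]
        # crude backtracking line search
        t = 1.0
        while t > 1e-10 and f([w[0] + t * d[0], w[1] + t * d[1]]) > f(w):
            t *= 0.5
        s = [t * d[0], t * d[1]]          # step s_k = w_{k+1} - w_k
        w = [w[0] + s[0], w[1] + s[1]]
        g_new = grad(w)
        y = [g_new[0] - g[0], g_new[1] - g[1]]   # gradient change y_k
        sy = s[0] * y[0] + s[1] * y[1]
        if abs(sy) < 1e-12:               # curvature too small: stop updating
            break
        rho = 1.0 / sy
        # BFGS update: H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
        E = [[(1.0 if i == j else 0.0) - rho * s[i] * y[j]
              for j in range(2)] for i in range(2)]
        A = [[sum(E[i][k] * H[k][j] for k in range(2))
              for j in range(2)] for i in range(2)]
        H = [[sum(A[i][k] * E[j][k] for k in range(2)) + rho * s[i] * s[j]
              for j in range(2)] for i in range(2)]
        g = g_new
    return w

w_star = bfgs([0.0, 0.0])
```

Unlike plain gradient descent, each step uses the accumulated curvature estimate H, which is what the abstract means by "more sophisticated exploitation of the gradient information".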
Pages: 331 - 339
Page count: 9
Related papers
50 records in total
  • [1] Modified quasi-Newton methods for training neural networks
    Robitaille, B
    Marcos, B
    Veillette, M
    Payre, G
    COMPUTERS & CHEMICAL ENGINEERING, 1996, 20 (09) : 1133 - 1140
  • [2] Practical Quasi-Newton Methods for Training Deep Neural Networks
    Goldfarb, Donald
    Ren, Yi
    Bahamou, Achraf
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [3] Fast Neural Network Training on FPGA Using Quasi-Newton Optimization Method
    Liu, Qiang
    Liu, Jia
    Sang, Ruoyu
    Li, Jiajun
    Zhang, Tao
    Zhang, Qijun
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2018, 26 (08) : 1575 - 1579
  • [4] On quasi-Newton methods with modified quasi-Newton equation
    Xiao, Wei
    Sun, Fengjian
    PROCEEDINGS OF 2008 INTERNATIONAL PRE-OLYMPIC CONGRESS ON COMPUTER SCIENCE, VOL II: INFORMATION SCIENCE AND ENGINEERING, 2008, : 359 - 363
  • [5] Neural Network Training based on quasi-Newton Method using Nesterov's Accelerated Gradient
    Ninomiya, Hiroshi
    PROCEEDINGS OF THE 2016 IEEE REGION 10 CONFERENCE (TENCON), 2016, : 51 - 54
  • [6] A Survey of Quasi-Newton Equations and Quasi-Newton Methods for Optimization
    Xu, Chengxian
    Zhang, Jianzhong
    ANNALS OF OPERATIONS RESEARCH, 2001, 103 : 213 - 234
  • [7] Momentum acceleration of quasi-Newton based optimization technique for neural network training
    Mahboubi, Shahrzad
    Indrapriyadarsini, S.
    Ninomiya, Hiroshi
    Asai, Hideki
    IEICE NONLINEAR THEORY AND ITS APPLICATIONS, 2021, 12 (03): : 554 - 574
  • [8] Survey of quasi-Newton equations and quasi-Newton methods for optimization
    Xu, CX
    Zhang, JZ
    ANNALS OF OPERATIONS RESEARCH, 2001, 103 (1-4) : 213 - 234
  • [9] Momentum Acceleration of Quasi-Newton Training for Neural Networks
    Mahboubi, Shahrzad
    Indrapriyadarsini, S.
    Ninomiya, Hiroshi
    Asai, Hideki
    PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT II, 2019, 11671 : 268 - 281
  • [10] Quasi-Newton barrier function algorithm for artificial neural network training with bounded weights
    Trafalis, Theodore B.
    Tutunji, Tarek A.
    ARTIFICIAL NEURAL NETWORKS IN ENGINEERING - PROCEEDINGS (ANNIE'94), 1994, 4 : 161 - 166