Stability analysis of gradient-based neural networks for optimization problems

Cited by: 39
Authors
Han, QM [1]
Liao, LZ
Qi, HD
Qi, LQ
Affiliations
[1] Hong Kong Baptist Univ, Dept Math, Kowloon, Hong Kong, Peoples R China
[2] Nanjing Normal Univ, Sch Math & Comp Sci, Nanjing 210097, Peoples R China
[3] Univ New S Wales, Sch Math, Sydney, NSW 2052, Australia
[4] Hong Kong Polytech Univ, Dept Appl Math, Kowloon, Hong Kong, Peoples R China
Funding
Australian Research Council;
Keywords
gradient-based neural network; equilibrium point; equilibrium set; asymptotic stability; exponential stability;
DOI
10.1023/A:1011245911067
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research];
Subject classification codes
070105; 12; 1201; 1202; 120202
Abstract
This paper introduces a new approach to analyzing the stability of neural network models without using any Lyapunov function. With this approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion covers both isolated equilibrium points and connected equilibrium sets, which may be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability for gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of the gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory from any initial point converges to an equilibrium point satisfying the second-order necessary optimality conditions. Promising simulation results of the refined gradient-based neural network on several problems are also reported.
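To make the model in the abstract concrete, below is a minimal sketch of the gradient-flow dynamics dx/dt = -∇f(x) that a gradient-based neural network realizes, integrated with a forward Euler step. The quadratic objective, step size, and iteration count are illustrative assumptions, not an example taken from the paper.

```python
import numpy as np

# Gradient-flow sketch: dx/dt = -grad f(x), integrated with forward Euler.
# Illustrative assumption: f(x) = 0.5 * x^T Q x - b^T x, a convex quadratic
# whose gradient is Lipschitz with constant L = ||Q||.

Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])    # symmetric positive definite -> convex f
b = np.array([1.0, -2.0])

def grad_f(x):
    return Q @ x - b          # grad f(x) = Q x - b

x = np.array([5.0, -5.0])     # arbitrary initial point
dt = 0.01                     # step size well below 2/L for a stable Euler scheme

for _ in range(10_000):
    x = x - dt * grad_f(x)    # Euler step along the gradient flow

# With f bounded below and grad f Lipschitz, the trajectory should settle
# at an equilibrium point; here the unique equilibrium is Q^{-1} b.
print("x(T)     =", x)
print("Q^{-1} b =", np.linalg.solve(Q, b))
print("||grad|| =", np.linalg.norm(grad_f(x)))
```

For this convex objective the simulated trajectory converges to the unique asymptotically stable equilibrium, consistent with the convergence result stated in the abstract; nonconvex objectives can instead produce the equilibrium sets the paper analyzes.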
Pages: 363-381
Page count: 19
Related papers
50 records in total
  • [21] Correcting gradient-based interpretations of deep neural networks for genomics
    Majdandzic, Antonio
    Rajesh, Chandana
    Koo, Peter K.
    GENOME BIOLOGY, 2023, 24 (01)
  • [22] Parallel implementation of gradient-based neural networks for SVM training
    Ferreira, Leonardo V.
    Kaszkurewicz, Eugenius
    Bhaya, Amit
    2006 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORK PROCEEDINGS, VOLS 1-10, 2006, : 339 - +
  • [23] A Gradient-Based Coverage Optimization Strategy for Mobile Sensor Networks
    Habibi, Jalal
    Mahboubi, Hamid
    Aghdam, Amir G.
    IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 2017, 4 (03) : 477 - 488
  • [24] Gradient-based optimization of hyperparameters
    Bengio, Y
    NEURAL COMPUTATION, 2000, 12 (08) : 1889 - 1900
  • [25] Gradient-based learning and optimization
    Cao, XR
    PROCEEDINGS OF THE 17TH INTERNATIONAL SYMPOSIUM ON COMPUTER AND INFORMATION SCIENCES, 2003, : 3 - 7
  • [26] Gradient-based simulation optimization
    Kim, Sujin
    PROCEEDINGS OF THE 2006 WINTER SIMULATION CONFERENCE, VOLS 1-5, 2006, : 159 - 167
  • [27] Gradient-based elephant herding optimization for cluster analysis
    Duan, Yuxian
    Liu, Changyun
    Li, Song
    Guo, Xiangke
    Yang, Chunlin
    APPLIED INTELLIGENCE, 2022, 52 (10) : 11606 - 11637
  • [29] Comprehensive analysis of gradient-based hyperparameter optimization algorithms
    Bakhteev, O. Y.
    Strijov, V. V.
    ANNALS OF OPERATIONS RESEARCH, 2020, 289 (01) : 51 - 65