Stability analysis of gradient-based neural networks for optimization problems

Cited: 39
Authors
Han, QM [1]
Liao, LZ
Qi, HD
Qi, LQ
Affiliations
[1] Hong Kong Baptist Univ, Dept Math, Kowloon, Hong Kong, Peoples R China
[2] Nanjing Normal Univ, Sch Math & Comp Sci, Nanjing 210097, Peoples R China
[3] Univ New S Wales, Sch Math, Sydney, NSW 2052, Australia
[4] Hong Kong Polytech Univ, Dept Appl Math, Kowloon, Hong Kong, Peoples R China
Funding
Australian Research Council;
Keywords
gradient-based neural network; equilibrium point; equilibrium set; asymptotic stability; exponential stability;
DOI
10.1023/A:1011245911067
CLC Classification
C93 [Management]; O22 [Operations Research];
Discipline Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
This paper introduces a new approach to analyzing the stability of neural network models without using any Lyapunov function. With this approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion covers both isolated equilibrium points and connected equilibrium sets, which may be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability for gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of the gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory, from any initial point, converges to an equilibrium point satisfying the second-order necessary optimality conditions. Promising simulation results for the refined gradient-based neural network on several problems are also reported.
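The gradient-based neural network model discussed in the abstract can be illustrated, in its simplest continuous form, as the gradient flow dx/dt = -∇f(x): trajectories follow the negative gradient until they reach an equilibrium point where ∇f(x) = 0. The sketch below is not the paper's exact formulation; it is a minimal numerical illustration under assumed ingredients (a convex quadratic objective, forward-Euler integration, and the function names `grad_f` and `gradient_flow`, all chosen here for the example). For a convex objective with Lipschitz-continuous gradient, the trajectory converges to the unique asymptotically stable equilibrium, consistent with the abstract's result.

```python
# Minimal sketch (not the paper's exact model): a gradient-based neural
# network realized as the gradient flow dx/dt = -grad f(x), integrated
# with forward Euler. f(x) = 0.5 x^T Q x - b^T x is convex (Q positive
# definite), so its gradient Q x - b is Lipschitz continuous and the
# unique equilibrium is x* = Q^{-1} b, where the gradient vanishes.
import numpy as np

Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])      # symmetric positive definite
b = np.array([1.0, 1.0])

def grad_f(x):
    # Gradient of the convex quadratic objective f(x) = 0.5 x^T Q x - b^T x.
    return Q @ x - b

def gradient_flow(x0, step=0.01, n_steps=5000):
    # Forward-Euler discretization of the trajectory dx/dt = -grad f(x).
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_f(x)
    return x

x_star = np.linalg.solve(Q, b)      # analytic equilibrium point Q^{-1} b
x_end = gradient_flow([5.0, -3.0])  # trajectory from an arbitrary start
print(np.allclose(x_end, x_star, atol=1e-6))   # trajectory reached x*
```

With the small step size used here, every starting point yields a trajectory that settles at the same equilibrium, illustrating the convergence behavior the abstract proves for the convex case.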
Pages: 363 - 381
Page count: 19
Related Papers
50 records
  • [31] Analysis of gradient-based routing protocols in sensor networks
    Faruque, J
    Psounis, K
    Helmy, A
    DISTRIBUTED COMPUTING IN SENSOR SYSTEMS, PROCEEDINGS, 2005, 3560 : 258 - 275
  • [32] A Gradient-based Continuous Method for Large-scale Optimization Problems
    Li-Zhi Liao
    Liqun Qi
    Hon Wah Tam
    Journal of Global Optimization, 2005, 31 : 271 - 286
  • [33] A Gradient-Based Search Method for Multi-objective Optimization Problems
    Gao, Weifeng
    Wang, Yiming
    Liu, Lingling
    Huang, Lingling
    INFORMATION SCIENCES, 2021, 578 : 129 - 146
  • [35] Gradient-based hybrid method for multi-objective optimization problems
    Yang, Dewei
    Fan, Qinwei
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 272
  • [36] A Novel Fractional Gradient-Based Learning Algorithm for Recurrent Neural Networks
    Khan, Shujaat
    Ahmad, Jawwad
    Naseem, Imran
    Moinuddin, Muhammad
    CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 2018, 37 (02) : 593 - 612
  • [38] Gradient-based neural networks for solving periodic Sylvester matrix equations
    Lv, Lingling
    Chen, Jinbo
    Zhang, Lei
    Zhang, Fengrui
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2022, 359 (18): : 10849 - 10866
  • [39] Chaotic Global Optimization by Direct Stability Control of Gradient-Based Systems
    Masuda, Kazuaki
    Kurihara, Kenzo
    2009 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC 2009), VOLS 1-9, 2009, : 4690 - 4697
  • [40] Differentiable Oscillators in Recurrent Neural Networks for Gradient-Based Sequence Modeling
    Otte, Sebastian
    Butz, Martin V.
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, PT II, 2017, 10614 : 745 - 746