Delayed neural network based on a new complementarity function for the NCP

Cited: 0
Authors
Li, Yuan-Min [1 ]
Lei, Tianyv [1 ]
Institutions
[1] Xidian Univ, Sch Math & Stat, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Nonlinear complementarity problem; Complementarity function; Gradient neural network; Delayed neural network; Compressed sensing; OPTIMIZATION PROBLEMS; NONSMOOTH; SUBJECT;
DOI
10.1016/j.eswa.2024.123980
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Nonlinear complementarity problems (NCPs) have been extensively studied in optimization due to their widespread applications. In this paper, we use a neural dynamic approach to solve the NCP. By combining the well-known Fischer-Burmeister (FB) function and the natural residual (NR) function, we construct a new family of complementarity functions with one parameter p, which is elegant and easy to apply. Combined with the Lagrange multiplier method, a new merit function is also developed. Based on the complementarity function and the merit function, we transform the NCP into an unconstrained minimization problem. Then, using the KKT conditions and the gradient descent method, we propose a Lagrange neural network method. Under mild conditions, every equilibrium point of the proposed neural network model is a solution of the NCP. More importantly, by introducing a delay factor, we also develop a novel delayed neural network model. Both networks are shown to be globally convergent, Lyapunov stable and exponentially stable. Finally, we present numerical experiments for the two neural network approaches, together with applications to compressed sensing signal reconstruction. Simulation results indicate that the parameter p in the complementarity function plays an important role in the convergence rate of both networks. The delayed neural network outperforms the non-delayed one in some specific situations. The experiments also demonstrate that both networks can efficiently reconstruct the original signals.
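The gradient neural network idea described in the abstract can be sketched as follows. This is an illustrative sketch only: the paper's parametric FB-NR combination, Lagrange multiplier construction, and delay term are not reproduced here. Instead, the sketch uses the plain FB function, the standard merit function Psi(x) = 0.5*||Phi(x)||^2, and a forward-Euler integration of the gradient flow; all function names and parameters below are assumptions, not the paper's implementation.

```python
import numpy as np

def fb(a, b, eps=1e-12):
    # Fischer-Burmeister complementarity function:
    # phi(a, b) = sqrt(a^2 + b^2) - a - b; phi(a, b) = 0 iff a >= 0, b >= 0, a*b = 0.
    # eps smooths the nonsmooth point at the origin (illustrative choice).
    return np.sqrt(a * a + b * b + eps) - a - b

def grad_merit(x, F, JF, eps=1e-12):
    # Gradient of the merit function Psi(x) = 0.5 * ||Phi(x)||^2,
    # where Phi_i(x) = phi(x_i, F_i(x)).
    Fx = F(x)
    r = np.sqrt(x * x + Fx * Fx + eps)
    da = x / r - 1.0           # d phi / d a, componentwise
    db = Fx / r - 1.0          # d phi / d b, componentwise
    phi = r - x - Fx           # Phi(x)
    return da * phi + JF(x).T @ (db * phi)

def gnn_solve(F, JF, x0, rho=1.0, dt=1e-2, iters=20000):
    # Forward-Euler integration of the gradient flow dx/dt = -rho * grad Psi(x);
    # equilibrium points of the flow are stationary points of Psi.
    x = x0.astype(float).copy()
    for _ in range(iters):
        x -= dt * rho * grad_merit(x, F, JF)
    return x
```

As a usage example, for the linear map F(x) = Mx + q with M = [[2, 1], [1, 2]] and q = [-1, -1], the flow drives x toward the complementarity solution x = (1/3, 1/3), where F(x) = 0.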
Pages: 15
Related papers
50 records total
  • [21] A neural network for the linear complementarity problem
    Liao, LZ
    MATHEMATICAL AND COMPUTER MODELLING, 1999, 29 (03) : 9 - 18
  • [22] A nonmonotone inexact smoothing Newton-type method for P0-NCP based on a parametric complementarity function
    Fang, Liang
    Tang, Jingyong
    Hu, Yunhong
    JOURNAL OF NUMERICAL MATHEMATICS, 2015, 23 (04) : 303 - 316
  • [23] Temporal sequences of patterns with an inverse function delayed neural network
    Sveholm, Johan
    Hayakawa, Yoshihiro
    Nakajima, Koji
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2006, E89A (10) : 2818 - 2824
  • [24] Network-Based Synchronization of Delayed Neural Networks
    Zhang, Yijun
    Han, Qing-Long
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2013, 60 (03) : 676 - 689
  • [25] An efficient neural network for solving convex optimization problems with a nonlinear complementarity problem function
    M. Ranjbar
    S. Effati
    S. M. Miri
    Soft Computing, 2020, 24 : 4233 - 4242
  • [27] Prediction of delayed renal allograft function using an artificial neural network
    Brier, ME
    Ray, PC
    Klein, JB
    NEPHROLOGY DIALYSIS TRANSPLANTATION, 2003, 18 (12) : 2655 - 2659
  • [28] Avoidance of the permanent oscillating state in the inverse function delayed neural network
    Sato, Akari
    Hayakawa, Yoshihiro
    Nakajima, Koji
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2007, E90A (10) : 2101 - 2107
  • [29] A neural network for a generalized vertical complementarity problem
    Hou, Bin
    Zhang, Jie
    Qiu, Chen
    AIMS MATHEMATICS, 2022, 7 (04) : 6650 - 6668
  • [30] Modswish: a new activation function for neural network
    Kalim, Heena
    Chug, Anuradha
    Singh, Amit Prakash
    EVOLUTIONARY INTELLIGENCE, 2024, 17 (04) : 2637 - 2647