Two Recurrent Neural Networks With Reduced Model Complexity for Constrained l1-Norm Optimization

Cited by: 7
Authors
Xia, Youshen [1 ]
Wang, Jun [2 ]
Lu, Zhenyu [3 ]
Huang, Liqing [4 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Coll Artificial Intelligence, Nanjing 211544, Peoples R China
[2] City Univ Hong Kong, Dept Comp Sci, Kowloon, Hong Kong, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Jiangsu Key Lab Meteorol Observat & Informat Proc, Nanjing 210044, Peoples R China
[4] Fujian Normal Univ, Coll Math & Informat, Fuzhou 350117, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Fast computation; linearly constrained l1-norm optimization; model complexity; recurrent neural network (RNN); L1 ESTIMATION PROBLEMS; EQUATIONS; SYSTEMS;
DOI
10.1109/TNNLS.2021.3133836
CLC Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Because of the robustness and sparsity properties of least absolute deviation (LAD, or l1-norm) optimization, developing effective solution methods for it has become an important topic. Recurrent neural networks (RNNs) are reported to be capable of effectively solving constrained l1-norm optimization problems, but their convergence speed is limited. To accelerate convergence, this article introduces two RNNs, in the form of continuous- and discrete-time systems, for solving l1-norm optimization problems with linear equality and inequality constraints. The two RNNs are theoretically proven to be globally convergent to optimal solutions without requiring any additional conditions. Owing to their reduced model complexity, the two RNNs can significantly expedite constrained l1-norm optimization. Numerical simulation results show that the two RNNs require much less computational time than related RNNs and numerical optimization algorithms for linearly constrained l1-norm optimization.
Pages: 6173 - 6185 (13 pages)
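Note: the article's two RNN models are not reproduced in this record. As context for the problem class named in the abstract, the following is a minimal Python sketch of linearly constrained l1-norm minimization solved by the standard linear-programming reformulation with SciPy's linprog. The function name l1_min_linprog and the test data are illustrative assumptions, not code from the article, and this generic LP approach is exactly the kind of baseline the paper's RNNs are designed to outperform in computational time.

    import numpy as np
    from scipy.optimize import linprog

    def l1_min_linprog(A_eq, b_eq, A_ub=None, b_ub=None):
        # Solve  min ||x||_1  s.t.  A_eq @ x = b_eq,  A_ub @ x <= b_ub
        # via the standard LP reformulation: introduce t >= |x| and minimize sum(t).
        n = A_eq.shape[1]
        c = np.concatenate([np.zeros(n), np.ones(n)])   # objective on z = [x, t]
        I = np.eye(n)
        # |x| <= t  <=>  x - t <= 0  and  -x - t <= 0
        G = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
        h = np.zeros(2 * n)
        if A_ub is not None:
            G = np.vstack([G, np.hstack([A_ub, np.zeros_like(A_ub)])])
            h = np.concatenate([h, b_ub])
        A = np.hstack([A_eq, np.zeros_like(A_eq)])
        res = linprog(c, A_ub=G, b_ub=h, A_eq=A, b_eq=b_eq,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        return res.x[:n]

    # Tiny illustrative example (hypothetical data): an underdetermined system
    # whose minimum-l1-norm solution tends to be sparse.
    rng = np.random.default_rng(0)
    A_eq = rng.standard_normal((4, 10))
    x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]
    b_eq = A_eq @ x_true
    print(np.round(l1_min_linprog(A_eq, b_eq), 3))

Per the abstract, the article replaces such generic solvers with two RNN dynamical systems of reduced model complexity that converge globally to optimal solutions of the same problem while requiring much less computational time in the reported simulations.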
Related Papers
50 records in total
  • [1] A Neural-Based Nonlinear L1-Norm Optimization Algorithm for Diagnosis of Networks
    He, Yigang
    Journal of Electronics (China), 1998, (04) : 365 - 371
  • [2] Linearized alternating directions method for l1-norm inequality constrained l1-norm minimization
    Cao, Shuhan
    Xiao, Yunhai
    Zhu, Hong
    APPLIED NUMERICAL MATHEMATICS, 2014, 85 : 142 - 153
  • [3] Lagrange Programming Neural Network for the l1-norm Constrained Quadratic Minimization
    Lee, Ching Man
    Feng, Ruibin
    Leung, Chi-Sing
    NEURAL INFORMATION PROCESSING, PT III, 2015, 9491 : 119 - 126
  • [4] L1-Norm Batch Normalization for Efficient Training of Deep Neural Networks
    Wu, Shuang
    Li, Guoqi
    Deng, Lei
    Liu, Liu
    Wu, Dong
    Xie, Yuan
    Shi, Luping
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30 (07) : 2043 - 2051
  • [6] A new neural network for l1-norm programming
    Li, Cuiping
    Gao, Xingbao
    Li, Yawei
    Liu, Rui
    NEUROCOMPUTING, 2016, 202 : 98 - 103
  • [7] L1-Norm Low-Rank Matrix Decomposition by Neural Networks and Mollifiers
    Liu, Yiguang
    Yang, Songfan
    Wu, Pengfei
    Li, Chunguang
    Yang, Menglong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2016, 27 (02) : 273 - 283
  • [9] L1-norm low-rank linear approximation for accelerating deep neural networks
    Zhao, Zhiqun
    Wang, Hengyou
    Sun, Hao
    He, Zhihai
    NEUROCOMPUTING, 2020, 400 : 216 - 226
  • [10] Reducing the Computational Cost in l1-Norm Optimization for Cellular Automaton Model Identification
    Yamamoto, Kota
    Yamamoto, Shigeru
    2017 56TH ANNUAL CONFERENCE OF THE SOCIETY OF INSTRUMENT AND CONTROL ENGINEERS OF JAPAN (SICE), 2017 : 1018 - 1021