Prescribed attractivity region selection for recurrent neural networks based on deep reinforcement learning

Cited by: 1
Authors
Bao, Gang [1 ]
Song, Zhenyan [1 ]
Xu, Rui [1 ]
Affiliations
[1] China Three Gorges Univ, Hubei Key Lab Cascaded Hydropower Stat Operat & C, Yichang 443002, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2024, Vol. 36, No. 5
Funding
National Natural Science Foundation of China;
Keywords
Recurrent neural networks; Attractivity region selection; Deep reinforcement learning; GLOBAL EXPONENTIAL STABILITY; TIME-VARYING DELAYS; DESIGN;
DOI
10.1007/s00521-023-09191-8
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recurrent neural networks (RNNs) produce the same output whenever their states converge to the same saturation region, and sufficiently strong external inputs can drive the network states into a prescribed saturation region. Unlike previous works, this paper employs deep reinforcement learning to obtain external inputs that make the network states converge to the desired saturation region. First, for five-dimensional neural networks, the deep Q-network (DQN) algorithm is used to compute optimal external inputs that drive the network state to the specified saturation region. When scaling to n-dimensional RNNs, the curse of dimensionality arises, so a batch computation of the external inputs is proposed to cope with it. Finally, the proposed method is validated by numerical examples; compared with existing methods, it yields less conservative conditions on the external inputs.
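The abstract describes the approach only at a high level. As a rough illustration of the DQN-based input selection it mentions, the sketch below treats the choice of a constant external input as a one-step decision problem; the candidate action set, the saturated RNN dynamics x' = -x + A*sat(x) + u, and the sign-matching reward are assumptions made here for illustration, not the authors' formulation, and the full DQN machinery (replay buffer, target network, multi-step episodes) is omitted for brevity.

import numpy as np
import torch
import torch.nn as nn

n = 5                                    # network dimension (the paper's small example is 5-D)
A = 0.5 * np.random.randn(n, n)          # assumed interconnection matrix
target_sign = np.ones(n)                 # assumed prescribed region: every state saturates at +1
candidates = [np.random.uniform(-3.0, 3.0, n) for _ in range(16)]  # assumed discrete set of external inputs

def simulate(u, steps=200, dt=0.05):
    # Euler-integrate the assumed saturated RNN under a constant external input u.
    x = np.random.uniform(-1.0, 1.0, n)
    for _ in range(steps):
        x = x + dt * (-x + A @ np.clip(x, -1.0, 1.0) + u)
    return x

def reward(x):
    # +1 for each state component that saturates with the prescribed sign.
    return float(np.sum((np.sign(x) == target_sign) & (np.abs(x) >= 1.0)))

qnet = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, len(candidates)))
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
epsilon = 0.3

for episode in range(500):
    x0 = np.random.uniform(-1.0, 1.0, n).astype(np.float32)   # random initial state observed by the agent
    q_values = qnet(torch.from_numpy(x0))
    if np.random.rand() < epsilon:
        a = np.random.randint(len(candidates))                 # explore
    else:
        a = int(q_values.argmax())                             # exploit
    r = reward(simulate(candidates[a]))
    loss = (q_values[a] - r) ** 2                              # one-step target: the observed reward itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

best = max(range(len(candidates)), key=lambda a: reward(simulate(candidates[a])))
print("selected external input:", candidates[best])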
Pages: 2399-2409
Number of pages: 11
Related Papers
50 records in total
  • [1] Prescribed attractivity region selection for recurrent neural networks based on deep reinforcement learning
    Gang Bao
    Zhenyan Song
    Rui Xu
    Neural Computing and Applications, 2024, 36 : 2399 - 2409
  • [2] Knowledge-based recurrent neural networks in reinforcement learning
    Le, Tien Dung
    Komeda, Takashi
    Takagi, Motoki
    PROCEEDINGS OF THE 11TH IASTED INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, 2007, : 169 - 174
  • [3] Deep Reinforcement Learning With Bidirectional Recurrent Neural Networks for Dynamic Spectrum Access
    Chen, Peng
    Guo, Shizeng
    Gao, Yulong
    2021 IEEE 94TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2021-FALL), 2021,
  • [4] Stable reinforcement learning with recurrent neural networks
    Knight J.N.
    Anderson C.
    Journal of Control Theory and Applications, 2011, 9 (3): : 410 - 420
  • [5] Stable reinforcement learning with recurrent neural networks
    James Nate KNIGHT
    Charles ANDERSON
    Journal of Control Theory and Applications, 2011, 9 (03) : 410 - 420
  • [6] A NOVEL RANK SELECTION SCHEME IN TENSOR RING DECOMPOSITION BASED ON REINFORCEMENT LEARNING FOR DEEP NEURAL NETWORKS
    Cheng, Zhiyu
    Li, Baopu
    Fan, Yanwen
    Bao, Yingze
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3292 - 3296
  • [7] On the Expressivity of Neural Networks for Deep Reinforcement Learning
    Dong, Kefan
    Luo, Yuping
    Yu, Tianhe
    Finn, Chelsea
    Ma, Tengyu
    25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,
  • [8] On the Expressivity of Neural Networks for Deep Reinforcement Learning
    Dong, Kefan
    Luo, Yuping
    Yu, Tianhe
    Finn, Chelsea
    Ma, Tengyu
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [9] Reinforcement Learning via Recurrent Convolutional Neural Networks
    Shankar, Tanmay
    Dwivedy, Santosha K.
    Guha, Prithwijit
    2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 2592 - 2597
  • [10] Supervised Learning Based Algorithm Selection for Deep Neural Networks
    Shi, Shaohuai
    Xu, Pengfei
    Chu, Xiaowen
    2017 IEEE 23RD INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2017, : 344 - 351