Robust and energy-efficient RPL optimization algorithm with scalable deep reinforcement learning for IIoT

Cited by: 0
Authors
Wang, Ying [1 ]
Li, Yuanyuan [1 ]
Lei, Jianjun [1 ]
Shang, Fengjun [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Industrial Internet of Things; Routing protocol; Deep reinforcement learning; Attention mechanism; WIRELESS SENSOR NETWORKS;
DOI
10.1016/j.comnet.2024.110894
CLC number
TP3 [Computing technology, computer technology];
Discipline code
0812 ;
Abstract
The increasing scale and complexity of the Industrial Internet of Things (IIoT) pose new challenges to the traditional routing protocol for low-power and lossy networks (RPL) in terms of dynamic management, data transmission reliability, and energy efficiency. This paper proposes a scalable deep reinforcement learning (DRL) algorithm with a multi-attention actor double-critic model (MADC) for routing optimization, meeting the IIoT's need for efficient and intelligent routing decisions while improving data transmission reliability and energy efficiency. Specifically, MADC adopts the centralized training and decentralized execution (CTDE) paradigm to decouple the model's training and inference tasks, which reduces the difficulty and computational cost of model learning and improves training efficiency. In addition, a lightweight actor network based on a multi-scale convolutional attention mechanism provides intelligent, real-time decision-making for resource-constrained nodes at low computational and storage complexity. Moreover, a scalable critic network employing multiple attention mechanisms is proposed; it not only suits dynamic, changing network environments but also evaluates local observation states more comprehensively and accurately, providing more precise and efficient guidance for model optimization. Furthermore, MADC incorporates a double-critic architecture to mitigate potential overestimation during training, thereby ensuring the model's robustness and reliability. Simulation results demonstrate that MADC outperforms existing RPL optimization algorithms in energy efficiency, data transmission reliability, and adaptability.
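The double-critic architecture described in the abstract resembles clipped double Q-learning (as popularized by TD3): two critics estimate the next-state value, and the smaller estimate is used to build the bootstrap target, countering the overestimation bias a single critic accumulates. A minimal sketch under that assumption follows; the function name and exact formulation are illustrative and not taken from the paper:

```python
def double_critic_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """TD target using the minimum of two critics' next-state estimates.

    Taking the pessimistic (minimum) value of the two critics counters
    the overestimation bias that a single bootstrapped critic tends to
    accumulate during training.
    """
    q_min = min(q1_next, q2_next)               # clipped double-Q estimate
    bootstrap = 0.0 if done else gamma * q_min  # no bootstrap at episode end
    return reward + bootstrap
```

Both critics are then regressed toward this shared target, so neither can drive the value estimate upward on its own.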
Pages: 14