Deep-Reinforcement-Learning-Based Joint Energy Replenishment and Data Collection Scheme for WRSN

Cited by: 2
Authors
Li, Jishan [1 ]
Deng, Zhichao [1 ]
Feng, Yong [1 ]
Liu, Nianbo [2 ]
Affiliations
[1] Kunming Univ Sci & Technol, Yunnan Key Lab Comp Technol Applicat, Kunming 650500, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
wireless rechargeable sensor networks; unmanned aerial vehicles; deep reinforcement learning; route protocol; WIRELESS SENSOR NETWORKS; TRAJECTORY OPTIMIZATION; POWER TRANSFER; UAV; DESIGN;
DOI
10.3390/s24082386
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Subject Classification Codes
070302 ; 081704 ;
Abstract
With the emergence of wireless rechargeable sensor networks (WRSNs), wirelessly recharging nodes using mobile charging vehicles (MCVs) has become a reality. However, existing approaches overlook the effective integration of node energy replenishment and mobile data collection. In this paper, we propose a joint energy replenishment and data collection scheme (D-JERDG) for WRSNs based on deep reinforcement learning. By capitalizing on the high mobility of unmanned aerial vehicles (UAVs), D-JERDG enables continuous visits to the cluster head node in each cluster, facilitating data collection and range-based charging. First, D-JERDG uses the K-means algorithm to partition the network into multiple clusters, and a cluster head selection algorithm based on an improved dynamic routing protocol elects cluster heads according to the residual energy and geographical location of the cluster member nodes. The simulated annealing (SA) algorithm then determines the shortest flight path. Subsequently, a multiobjective deep deterministic policy gradient (MODDPG) DRL model is employed to control and optimize the UAV's instantaneous heading and speed, effectively planning the UAV's hover points. By redesigning the reward function, joint optimization of multiple objectives such as node death rate, UAV throughput, and average flight energy consumption is achieved. Extensive simulation results show that the proposed D-JERDG achieves joint optimization of these objectives and exhibits significant advantages over the baseline in terms of throughput, time utilization, and charging cost, among other indicators.
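The abstract outlines the D-JERDG pipeline (K-means clustering, cluster head election by residual energy and location, an SA flight route, and a multi-objective MODDPG reward) but gives no formulas, so the following is only a minimal Python sketch of the clustering/cluster-head-election step and a weighted reward of the kind described. The function names (`elect_cluster_heads`, `multi_objective_reward`), the 0.6/0.4 score weights, and the unit reward weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (illustrative only): K-means clustering plus a cluster-head
# score combining residual energy and distance to the cluster centroid, and a
# weighted multi-objective reward over the three objectives named in the
# abstract. All weights and scoring choices are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def elect_cluster_heads(node_xy, node_energy, n_clusters,
                        w_energy=0.6, w_dist=0.4):
    """Partition nodes with K-means and elect one head per cluster.

    The score favours high residual energy and closeness to the cluster
    centroid; the 0.6/0.4 weights are placeholders, not the paper's values.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(node_xy)
    heads = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(node_xy[members] - km.cluster_centers_[c], axis=1)
        score = (w_energy * node_energy[members] / (node_energy[members].max() + 1e-9)
                 - w_dist * dist / (dist.max() + 1e-9))
        heads.append(members[np.argmax(score)])
    return np.array(heads), km.labels_

def multi_objective_reward(death_rate, throughput, flight_energy,
                           weights=(1.0, 1.0, 1.0)):
    """Weighted combination of the abstract's three objectives: penalise node
    deaths and UAV flight energy, reward throughput (terms assumed to be
    normalised to comparable scales)."""
    return (-weights[0] * death_rate
            + weights[1] * throughput
            - weights[2] * flight_energy)

# Example usage with synthetic data: 200 nodes in a 500 m x 500 m field.
node_xy = np.random.uniform(0.0, 500.0, size=(200, 2))
node_energy = np.random.uniform(0.2, 1.0, size=200)
heads, labels = elect_cluster_heads(node_xy, node_energy, n_clusters=8)
```

In the scheme described above, a scalarised reward of this form would be fed to MODDPG, which controls the UAV's heading and speed; the SA tour over the elected cluster heads and the DRL training loop are omitted from this sketch.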
Pages: 23
Related Papers
50 records in total
  • [31] A joint task caching and computation offloading scheme based on deep reinforcement learning
    Tian, Huizi
    Zhu, Lin
    Tan, Long
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2025, 18 (01) : 26 - 26
  • [32] Deep Reinforcement Learning Based Energy Efficient Multi-UAV Data Collection for IoT Networks
    Khodaparast, Seyed Saeed
    Lu, Xiao
    Wang, Ping
    Uyen Trang Nguyen
    IEEE OPEN JOURNAL OF VEHICULAR TECHNOLOGY, 2021, 2 : 249 - 260
  • [33] Deep-Reinforcement-Learning-Based Service Placement for Video Analysis in Edge Computing
    Zhu, Qijun
    Wang, Sichen
    Huang, Hualong
    Lei, Yuchuan
    Zhan, Wenhan
    Duan, Hancong
    2023 8TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND BIG DATA ANALYTICS, ICCCBDA, 2023, : 355 - 359
  • [34] Deep-Reinforcement-Learning-Based Semantic Navigation of Mobile Robots in Dynamic Environments
    Kaestner, Linh
    Marx, Cornelius
    Lambrecht, Jens
    2020 IEEE 16TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2020, : 1110 - 1115
  • [35] A Multi-Agent Deep-Reinforcement-Learning-Based Strategy for Safe Distributed Energy Resource Scheduling in Energy Hubs
    Zhang, Xi
    Wang, Qiong
    Yu, Jie
    Sun, Qinghe
    Hu, Heng
    Liu, Ximu
    ELECTRONICS, 2023, 12 (23)
  • [36] Holistic Deep-Reinforcement-Learning-based Training for Autonomous Navigation in Crowded Environments
    Kaestner, Linh
    Meusel, Marvin
    Bhuiyan, Teham
    Lambrecht, Jens
    2023 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS, AIM, 2023, : 1302 - 1308
  • [37] Deep-reinforcement-learning-based self-organization of freely undulatory swimmers
    Yu, Huiyang
    Liu, Bo
    Wang, Chengyun
    Liu, Xuechao
    Lu, Xi-Yun
    Huang, Haibo
    PHYSICAL REVIEW E, 2022, 105 (04)
  • [38] CDDPG: A Deep-Reinforcement-Learning-Based Approach for Electric Vehicle Charging Control
    Zhang, Feiye
    Yang, Qingyu
    An, Dou
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (05) : 3075 - 3087
  • [39] Evaluation of a Deep-Reinforcement-Learning-based Controller for the Control of an Autonomous Underwater Vehicle
    Sola, Yoann
    Chaffre, Thomas
    le Chenadec, Gilles
    Sammut, Karl
    Clement, Benoit
    GLOBAL OCEANS 2020: SINGAPORE - U.S. GULF COAST, 2020,
  • [40] Deep-Reinforcement-Learning-Based Proportional Fair Scheduling Control Scheme for Underlay D2D Communication
    Budhiraja, Ishan
    Kumar, Neeraj
    Tyagi, Sudhanshu
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (05) : 3143 - 3156