Deep-Reinforcement-Learning-Based Joint Energy Replenishment and Data Collection Scheme for WRSN

Cited by: 2
Authors
Li, Jishan [1 ]
Deng, Zhichao [1 ]
Feng, Yong [1 ]
Liu, Nianbo [2 ]
Affiliations
[1] Kunming Univ Sci & Technol, Yunnan Key Lab Comp Technol Applicat, Kunming 650500, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
wireless rechargeable sensor networks; unmanned aerial vehicles; deep reinforcement learning; route protocol; WIRELESS SENSOR NETWORKS; TRAJECTORY OPTIMIZATION; POWER TRANSFER; UAV; DESIGN;
DOI
10.3390/s24082386
CLC number
O65 [Analytical Chemistry];
Subject classification code
070302; 081704
Abstract
With the emergence of wireless rechargeable sensor networks (WRSNs), wirelessly recharging nodes with mobile charging vehicles (MCVs) has become practical. However, existing approaches overlook the effective integration of node energy replenishment and mobile data collection. In this paper, we propose a deep-reinforcement-learning-based joint energy replenishment and data collection scheme (D-JERDG) for WRSNs. By capitalizing on the high mobility of unmanned aerial vehicles (UAVs), D-JERDG enables continuous visits to the cluster head node in each cluster, facilitating data collection and range-based charging. First, D-JERDG uses the K-means algorithm to partition the network into multiple clusters, and a cluster head selection algorithm based on an improved dynamic routing protocol elects cluster heads according to the remaining energy and geographical location of the cluster member nodes. Afterward, a simulated annealing (SA) algorithm determines the shortest flight path through the cluster heads. Subsequently, a multiobjective deep deterministic policy gradient (MODDPG) model, a DRL method, controls and optimizes the UAV's instantaneous heading and speed, effectively planning its hover points. By redesigning the reward function, multiple objectives such as node death rate, UAV throughput, and average flight energy consumption are optimized jointly. Extensive simulation results show that D-JERDG achieves this joint optimization and offers significant advantages over the baselines in throughput, time utilization, and charging cost, among other indicators.
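To make the pipeline in the abstract concrete, the following Python sketch illustrates only the first stage: K-means clustering of a toy node set and a cluster-head selection rule that trades off residual energy against distance to the cluster centroid. The field size, node count, the weight alpha, and the scoring formula are illustrative assumptions for this sketch, not parameters taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): K-means clustering of a toy
# WRSN and energy/location-based cluster-head selection. All parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 60 sensor nodes in a 500 m x 500 m field with residual energy in joules.
positions = rng.uniform(0, 500, size=(60, 2))
energy = rng.uniform(0.2, 2.0, size=60)


def kmeans(points, k, iters=50):
    """Plain K-means: partition the node positions into k clusters."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each node to its nearest centroid, then recompute centroids.
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids, axis=2), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = points[labels == c].mean(axis=0)
    return labels, centroids


def select_cluster_heads(points, energy, labels, centroids, alpha=0.7):
    """Pick one head per cluster: favor high residual energy, penalize distance
    to the cluster centroid (alpha is an assumed trade-off weight)."""
    heads = []
    for c in range(len(centroids)):
        idx = np.where(labels == c)[0]
        if idx.size == 0:
            continue  # skip empty clusters
        dist = np.linalg.norm(points[idx] - centroids[c], axis=1)
        score = (alpha * energy[idx] / energy[idx].max()
                 - (1 - alpha) * dist / (dist.max() + 1e-9))
        heads.append(idx[np.argmax(score)])
    return np.array(heads)


labels, centroids = kmeans(positions, k=5)
heads = select_cluster_heads(positions, energy, labels, centroids)
print("cluster head node indices:", heads)
```

In the full scheme, the elected heads would then define the SA flight tour and the candidate hover points refined by the MODDPG policy; those later stages are not sketched here.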
Pages: 23
Related papers
50 records in total
  • [21] Deep-Reinforcement-Learning-Based Intrusion Detection in Aerial Computing Networks
    Tao, Jing
    Han, Ting
    Li, Ruidong
    IEEE NETWORK, 2021, 35(4): 66-72
  • [22] Deep-Reinforcement-Learning-Based Autonomous UAV Navigation With Sparse Rewards
    Wang, Chao
    Wang, Jian
    Wang, Jingjing
    Zhang, Xudong
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7(7): 6180-6190
  • [23] Deep-Reinforcement-Learning-Based Energy Management Strategy for Supercapacitor Energy Storage Systems in Urban Rail Transit
    Yang, Zhongping
    Zhu, Feiqin
    Lin, Fei
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22(2): 1150-1160
  • [24] A Deep-Reinforcement-Learning-Based Digital Twin for Manufacturing Process Optimization
    Khdoudi, Abdelmoula
    Masrour, Tawfik
    El Hassani, Ibtissam
    El Mazgualdi, Choumicha
    SYSTEMS, 2024, 12(2)
  • [25] Deep Reinforcement Learning for UAV-Based SDWSN Data Collection
    Karegar, Pejman A.
    Al-Hamid, Duaa Zuhair
    Chong, Peter Han Joo
    FUTURE INTERNET, 2024, 16(11)
  • [26] Deep-Reinforcement-Learning-Based Energy-Efficient Resource Management for Social and Cognitive Internet of Things
    Yang, Helin
    Zhong, Wen-De
    Chen, Chen
    Alphones, Arokiaswami
    Xie, Xianzhong
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7(6): 5677-5689
  • [27] Deep-Reinforcement-Learning-Based Offloading Scheduling for Vehicular Edge Computing
    Zhan, Wenhan
    Luo, Chunbo
    Wang, Jin
    Wang, Chao
    Min, Geyong
    Duan, Hancong
    Zhu, Qingxin
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7(6): 5449-5465
  • [28] Efficient Data Collection Scheme for Multi-Modal Underwater Sensor Networks Based on Deep Reinforcement Learning
    Song, Shanshan
    Liu, Jun
    Guo, Jiani
    Lin, Bin
    Ye, Qiang
    Cui, Junhong
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72(5): 6558-6570
  • [29] A Deep-Reinforcement-Learning-Based Recommender System for Occupant-Driven Energy Optimization in Commercial Buildings
    Wei, Peter
    Xia, Stephen
    Chen, Runfeng
    Qian, Jingyi
    Li, Chong
    Jiang, Xiaofan
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7(7): 6402-6413
  • [30] Integrating Local Learning to Improve Deep-Reinforcement-Learning-based Pairs Trading Strategies
    Chang, Wei-Che
    Dai, Tian-Shyr
    Chen, Ying-Ping
    Hsieh, Chin-Yi
    Chang, Yu-Wei
    Huang, Yu-Han
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024: 714-719