Dynamic Spatiotemporal Charging Scheduling Based on Deep Reinforcement Learning for WRSN

Cited by: 0
Authors
Wang Y.-J. [1 ]
Feng Y. [1 ]
Liu M. [2 ]
Liu N.-B. [2 ]
Affiliations
[1] Yunnan Key Laboratory of Computer Technology Applications, Kunming University of Science and Technology, Kunming
[2] School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu
Source
Ruan Jian Xue Bao/Journal of Software | 2024 / Vol. 35 / Issue 03
Keywords
charging duration; charging performance; charging sequence; deep reinforcement learning; spatiotemporal charging scheme; wireless rechargeable sensor network (WRSN);
DOI
10.13328/j.cnki.jos.006814
Abstract
Efficient mobile charging scheduling is a key technology for building wireless rechargeable sensor networks (WRSNs) with a long life cycle and sustainable operation. Existing charging methods based on reinforcement learning consider only the spatial dimension of mobile charging scheduling, i.e., the path planning of mobile chargers (MCs), while neglecting the temporal dimension, i.e., the adjustment of charging duration, and therefore suffer performance limitations. This study proposes a dynamic spatiotemporal charging scheduling scheme based on deep reinforcement learning (SCSD) and establishes a deep reinforcement learning model that dynamically adjusts both the charging sequence and the charging duration. To handle the discrete charging sequence planning and the continuous charging duration adjustment in mobile charging scheduling, the study uses DQN to optimize the charging sequence of the nodes to be charged, and calculates and dynamically adjusts their charging durations. By optimizing the spatial and temporal dimensions separately, the proposed SCSD effectively improves charging performance while avoiding node power failure. Simulation experiments show that SCSD has significant performance advantages over several well-known typical charging schemes. © 2024 Chinese Academy of Sciences. All rights reserved.
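To make the two decisions described in the abstract concrete (the spatial choice of which node to charge next and the temporal choice of how long to charge it), the sketch below shows a minimal, self-contained example. It is not the authors' SCSD implementation: the linear Q-function standing in for a DQN, the state layout, and all constants (E_MAX, CHARGE_RATE, MC_SPEED) are illustrative assumptions.

# Illustrative sketch only -- not the paper's SCSD algorithm.
# Assumes a simplified WRSN state per node (residual energy, consumption
# rate, distance to the mobile charger); all names and values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_NODES = 5                 # sensor nodes awaiting charging
STATE_DIM = 3 * N_NODES     # per node: residual energy, consumption rate, distance
E_MAX = 10.0                # assumed node battery capacity (J)
CHARGE_RATE = 0.5           # assumed MC charging power (J/s)
MC_SPEED = 1.0              # assumed MC travel speed (m/s)

def q_values(state, weights):
    """Toy linear Q-function: one Q-value per candidate node (spatial decision)."""
    return state @ weights                      # shape: (N_NODES,)

def select_next_node(state, weights, eps=0.1):
    """Epsilon-greedy choice of the next node in the charging sequence (DQN-style)."""
    if rng.random() < eps:
        return int(rng.integers(N_NODES))
    return int(np.argmax(q_values(state, weights)))

def charging_duration(residual, consume_rate, travel_time):
    """Temporal decision: charge long enough to refill the node after it has
    kept draining during the MC's travel, capped by battery capacity."""
    energy_on_arrival = max(residual - consume_rate * travel_time, 0.0)
    deficit = E_MAX - energy_on_arrival
    return deficit / CHARGE_RATE

# One scheduling step with random placeholder data.
residual = rng.uniform(2.0, 8.0, N_NODES)       # J
consume  = rng.uniform(0.01, 0.05, N_NODES)     # J/s
distance = rng.uniform(5.0, 50.0, N_NODES)      # m
state = np.concatenate([residual / E_MAX, consume, distance / 100.0])
weights = rng.normal(scale=0.1, size=(STATE_DIM, N_NODES))

node = select_next_node(state, weights)
t_travel = distance[node] / MC_SPEED
t_charge = charging_duration(residual[node], consume[node], t_travel)
print(f"charge node {node} for {t_charge:.1f} s after {t_travel:.1f} s travel")

In the paper, the spatial choice is learned by a DQN over the network state and the charging duration is recomputed and adjusted dynamically; the sketch only mirrors that split between a discrete sequencing decision and a continuous duration calculation.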
Pages: 1485-1501
Page count: 16