A Deep Deterministic Policy Gradient Optimization Approach for Multi-users Data Offloading in Wireless Powered Communication Network

Cited by: 0
Authors
Geng T. [1 ,2 ]
Gao A. [1 ,2 ]
Wang Q. [1 ,2 ]
Duan W. [1 ,2 ]
Hu Y. [3 ]
Affiliations
[1] School of Electronics and Information, Northwestern Polytechnical University, Xi'an
[2] State-Province Joint Engineering Laboratory of IoT Technology and Application, Xi'an
[3] School of Electronic Control, Chang'an University, Xi'an
Source
Binggong Xuebao/Acta Armamentarii | 2021 / Vol. 42 / No. 12
Keywords
Backscattering; Data offloading; Deep deterministic policy gradient; Reinforcement learning
DOI
10.3969/j.issn.1000-1093.2021.12.013
Abstract
In a wireless powered communication network (WPCN), wireless devices can offload data through wireless backscattering and active radio frequency (RF) transmission. Properly adjusting the working mode and managing the time allocation between ambient backscattering and active RF transmission is a great challenge for reducing the system transmission delay and enhancing the transmission efficiency. A deep deterministic policy gradient (DDPG) algorithm is proposed to search for the best time allocation in a continuous domain, taking into account the data size, the channel conditions, and the fairness among wireless devices. The experimental results show that the DDPG algorithm converges within a finite number of time steps, and that, by introducing the Jain fairness index, all the wireless devices can complete their data offloading at the same time. Compared with the traditional Round-Robin and Greedy algorithms, the DDPG algorithm reduces the average transmission delay by 77.7% and 24.2%, respectively, and the energy efficiency is largely improved, especially for wireless devices with a small amount of offloading data. © 2021, Editorial Board of Acta Armamentarii. All rights reserved.
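Two quantities central to the abstract can be sketched in a few lines. The snippet below is an illustrative assumption, not the authors' implementation: `time_allocation` maps an unconstrained continuous actor output to time shares on the simplex via a softmax (one common choice for continuous-action time allocation; the paper does not specify its exact mapping), and `jain_index` computes Jain's fairness index J = (Σx)² / (n·Σx²), which equals 1 when all devices receive equal service.

```python
import math

def time_allocation(raw_actions):
    """Map unconstrained actor outputs to time shares that sum to 1.

    Softmax with max-subtraction for numerical stability; this keeps a
    continuous DDPG action inside the probability simplex. The mapping
    used in the paper is an assumption here.
    """
    m = max(raw_actions)
    exps = [math.exp(a - m) for a in raw_actions]
    total = sum(exps)
    return [e / total for e in exps]

def jain_index(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).

    Equals 1.0 when all devices get equal throughput and 1/n in the
    most unfair case (one device gets everything).
    """
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))
```

For example, a perfectly balanced allocation `[1, 1, 1, 1]` yields an index of 1.0, while the fully unfair `[1, 0, 0, 0]` yields 0.25, matching the 1/n lower bound for n = 4 devices.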
Pages: 2655-2663
Page count: 8
References
15 references in total
  • [1] LU X, JIANG H, NIYATO D, et al. Wireless-powered device-to-device communications with ambient backscattering: performance modeling and analysis[J]. IEEE Transactions on Wireless Communications, 17, 3, pp. 1528-1544, (2018)
  • [2] YE Y H, SHI L Q, HU R Q Y, et al. Energy-efficient resource allocation for wirelessly powered backscatter communications[J]. IEEE Communications Letters, 23, 8, pp. 1418-1422, (2019)
  • [3] YE Y H, SHI L Q, LU G Y. User-centric energy efficiency fairness in backscatter-assisted wireless powered communication network[J]. Journal on Communications, 41, 7, pp. 84-94, (2020)
  • [4] CHEN W Y, DING H Y, WANG S L, et al. Ambient backscatter communications over NOMA downlink channels[J]. China Communications, 17, 6, pp. 80-100, (2020)
  • [5] XIE T Y, LU B, YANG Z Z. Time allocation optimization in backscatter assisted cognitive wireless powered communication networks[J]. Journal of Signal Processing, 34, 1, pp. 98-106, (2018)
  • [6] HOANG D T, NIYATO D, WANG P, et al. Optimal time sharing in RF-powered backscatter cognitive radio networks[C]. Proceedings of IEEE International Conference on Communications, (2017)
  • [7] KISHORE R, GURUGOPINATH S, SOFOTASIOS P C, et al. Opportunistic ambient backscatter communication in RF-powered cognitive radio networks[J]. IEEE Transactions on Cognitive Communications and Networking, 5, 2, pp. 413-426, (2019)
  • [8] HOU Z W, CHEN H, LI Y H, et al. A contract-based incentive mechanism for energy harvesting-based Internet of Things[C]. Proceedings of IEEE International Conference on Communications, (2017)
  • [9] HOANG D T, NIYATO D, WANG P, et al. Overlay RF-powered backscatter cognitive radio networks: a game theoretic approach[C]. Proceedings of IEEE International Conference on Communications, (2017)
  • [10] WEN X K, BI S Z, LIN X H, et al. Throughput maximization for ambient backscatter communication: a reinforcement learning approach[C]. Proceedings of 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference, pp. 997-1003, (2019)