Onboard Double Q-Learning for Airborne Data Capture in Wireless Powered IoT Networks

Cited by: 13
Authors
Li, Kai [1 ]
Ni, Wei [2 ]
Wei, Bo [3 ]
Tovar, Eduardo [1 ]
Source
Li, Kai (kai@isep.ipp.pt) | Institute of Electrical and Electronics Engineers Inc., United States | Volume 02
Keywords
Data acquisition; Packet loss; Fading channels; Unmanned aerial vehicles (UAV); Antennas; Intelligent systems; Energy transfer; Scheduling algorithms; Learning algorithms
DOI
10.1109/LNET.2020.2989130
Abstract
This letter studies the use of Unmanned Aerial Vehicles (UAVs) in Internet-of-Things (IoT) networks, where a UAV with microwave power transfer (MPT) capability is employed to hover over the area of interest, charging IoT nodes remotely and collecting their data. Scheduling MPT and data transmission is critical to reducing the data packet loss resulting from buffer overflows and channel fading. In practice, prior knowledge of the battery level and data queue length of the IoT nodes is not available at the UAV. A new onboard double Q-learning scheduling algorithm is proposed to optimally select the IoT node to be interrogated for data collection and MPT along the flight trajectory of the UAV, thereby asymptotically minimizing the packet loss of the IoT networks. Simulations confirm the superiority of our algorithm over Q-learning-based alternatives in terms of packet loss and learning efficiency/speed. © 2019 IEEE.
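As a rough illustration of the technique named in the abstract (not the authors' implementation), the sketch below shows a tabular double Q-learning loop in which the action is the IoT node selected for interrogation and the reward stands in for negative packet loss. All environment details here (state space, reward shape, hyperparameters) are hypothetical placeholders:

```python
import random

N_NODES = 4        # actions: which IoT node to interrogate (illustrative)
N_STATES = 6       # coarse state index, e.g., quantized UAV position (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Two independent Q-tables, the core of double Q-learning
QA = [[0.0] * N_NODES for _ in range(N_STATES)]
QB = [[0.0] * N_NODES for _ in range(N_STATES)]

def select_action(state):
    """Epsilon-greedy on the sum of the two Q-tables."""
    if random.random() < EPS:
        return random.randrange(N_NODES)
    q = [QA[state][a] + QB[state][a] for a in range(N_NODES)]
    return max(range(N_NODES), key=q.__getitem__)

def update(state, action, reward, next_state):
    """Double Q-learning update: one table picks the argmax action,
    the other evaluates it, which curbs the overestimation bias of
    plain Q-learning."""
    if random.random() < 0.5:
        a_star = max(range(N_NODES), key=QA[next_state].__getitem__)
        target = reward + GAMMA * QB[next_state][a_star]
        QA[state][action] += ALPHA * (target - QA[state][action])
    else:
        b_star = max(range(N_NODES), key=QB[next_state].__getitem__)
        target = reward + GAMMA * QA[next_state][b_star]
        QB[state][action] += ALPHA * (target - QB[state][action])

# Toy episode: reward is a stand-in for avoided packet loss.
random.seed(0)
state = 0
for _ in range(1000):
    action = select_action(state)
    reward = 1.0 if action == state % N_NODES else -1.0  # illustrative
    next_state = (state + 1) % N_STATES
    update(state, action, reward, next_state)
    state = next_state
```

Maintaining two tables and cross-evaluating the greedy action is what distinguishes double Q-learning from the single-table baseline the letter compares against.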
Pages: 71 - 75
Related Papers
50 records in total
  • [41] Real-Time Data Transmission Scheduling Algorithm for Wireless Sensor Networks Based on Deep Q-Learning
    Zhang, Aiqi
    Sun, Meiyi
    Wang, Jiaqi
    Li, Zhiyi
    Cheng, Yanbo
    Wang, Cheng
    ELECTRONICS, 2022, 11 (12)
  • [42] Delay-aware data fusion in duty-cycled wireless sensor networks: A Q-learning approach
    Donta, Praveen Kumar
    Amgoth, Tarachand
    Annavarapu, Chandra Sekhara Rao
    SUSTAINABLE COMPUTING-INFORMATICS & SYSTEMS, 2022, 33
  • [43] On the source-to-target gap of robust double deep Q-learning in digital twin-enabled wireless networks
    McManus, Maxwell
    Guan, Zhangyu
    Mastronarde, Nicholas
    Zou, Shaofeng
    BIG DATA IV: LEARNING, ANALYTICS, AND APPLICATIONS, 2022, 12097
  • [44] Maximizing Opinion Polarization Using Double Deep Q-Learning in Social Networks
    Zareer, Mohamed N.
    Selmic, Rastko R.
    IEEE ACCESS, 2025, 13 : 57398 - 57412
  • [45] Improving the efficiency of reinforcement learning for a spacecraft powered descent with Q-learning
    Wilson, Callum
    Riccardi, Annalisa
    OPTIMIZATION AND ENGINEERING, 2023, 24 (01) : 223 - 255
  • [47] Optimization of NB-IoT Uplink Resource Allocation via Double Deep Q-Learning
    Zhong, Han
    Zhang, Runzhou
    Jin, Fan
    Ning, Lei
    COMMUNICATIONS, SIGNAL PROCESSING, AND SYSTEMS, VOL. 1, 2022, 878 : 775 - 781
  • [48] Q-learning Energy Management Strategy for TEG-powered Environmental Monitoring IoT Devices: A Pilot Study
    Prauzek, Michal
    Konecny, Jaromir
    Paterova, Tereza
    2022 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2022, : 211 - 216
  • [49] Full-Duplex Wireless Powered IoT Networks
    Kang, Kang
    Ye, Rong
    Pan, Zhenni
    Liu, Jiang
    Shimamoto, Shigeru
    IEEE ACCESS, 2018, 6 : 53546 - 53556
  • [50] Variational quantum compiling with double Q-learning
    He, Zhimin
    Li, Lvzhou
    Zheng, Shenggen
    Li, Yongyao
    Situ, Haozhen
    NEW JOURNAL OF PHYSICS, 2021, 23 (03)