Onboard Double Q-Learning for Airborne Data Capture in Wireless Powered IoT Networks

Cited by: 13
Authors
Li, Kai
Ni, Wei
Wei, Bo
Tovar, Eduardo
Source
IEEE Networking Letters, Institute of Electrical and Electronics Engineers Inc., United States, Issue 02. Corresponding author: Li, Kai (kai@isep.ipp.pt)
Keywords
Data acquisition; Packet loss; Fading channels; Unmanned aerial vehicles (UAV); Antennas; Intelligent systems; Energy transfer; Scheduling algorithms; Learning algorithms
DOI
10.1109/LNET.2020.2989130
Abstract
This letter studies the use of Unmanned Aerial Vehicles (UAVs) in Internet-of-Things (IoT) networks, where a UAV with microwave power transfer (MPT) capability hovers over the area of interest, remotely charging IoT nodes and collecting their data. Scheduling MPT and data transmission is critical to reducing the data packet loss caused by buffer overflows and channel fading. In practice, prior knowledge of the battery levels and data queue lengths of the IoT nodes is not available at the UAV. A new onboard double Q-learning scheduling algorithm is proposed to optimally select the IoT node to be interrogated for data collection and MPT along the flight trajectory of the UAV, thereby asymptotically minimizing the packet loss of the IoT network. Simulations confirm the superiority of our algorithm over Q-learning based alternatives in terms of packet loss and learning efficiency/speed. © 2019 IEEE.
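The double Q-learning update underlying the scheduling approach described in the abstract can be sketched as follows. This is a generic illustration, not the authors' implementation: the state and action encodings (e.g., a discretized tuple of observed battery level and queue length as the state, the index of the selected IoT node as the action) are hypothetical, as are all names and parameter values. The core idea of double Q-learning is to maintain two value tables and use one to select the greedy action while the other evaluates it, which reduces the overestimation bias of standard Q-learning.

```python
import random
from collections import defaultdict

def double_q_update(QA, QB, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One double Q-learning step.

    With probability 1/2, update table QA: the greedy action at s_next is
    chosen via QA but its value is read from QB (and vice versa), so the
    maximization and evaluation use independent estimates.
    """
    if random.random() < 0.5:
        a_star = max(actions, key=lambda x: QA[(s_next, x)])
        QA[(s, a)] += alpha * (r + gamma * QB[(s_next, a_star)] - QA[(s, a)])
    else:
        b_star = max(actions, key=lambda x: QB[(s_next, x)])
        QB[(s, a)] += alpha * (r + gamma * QA[(s_next, b_star)] - QB[(s, a)])

def select_action(QA, QB, s, actions, epsilon=0.1):
    """Epsilon-greedy selection over the sum of both tables."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda x: QA[(s, x)] + QB[(s, x)])
```

In a scheduling setting like the one in the letter, the reward would encode the negative of packet loss (buffer overflow plus channel-induced loss), so maximizing the learned value asymptotically minimizes loss; the exact reward design here is an assumption for illustration.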
Pages: 71 - 75
Related Papers
50 records
  • [21] UAV Autonomous Navigation for Wireless Powered Data Collection with Onboard Deep Q-Network
    Li, Yuting
    Ding, Yi
    Gao, Jiangchuan
    Liu, Yusha
    Hu, Jie
    Yang, Kun
    ZTE Communications, 2023, 21 (02) : 80 - 87
  • [22] Dynamic Attack Detection in IoT Networks: An Ensemble Learning Approach With Q-Learning and Explainable AI
    Turaka, Padmasri
    Panigrahy, Saroj Kumar
    IEEE ACCESS, 2024, 12 : 161925 - 161940
  • [23] Deep Reinforcement Learning with Double Q-Learning
    van Hasselt, Hado
    Guez, Arthur
    Silver, David
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 2094 - 2100
  • [24] Q-LEARNING WITH CENSORED DATA
    Goldberg, Yair
    Kosorok, Michael R.
    ANNALS OF STATISTICS, 2012, 40 (01): : 529 - 560
  • [25] Learning to Play Pac-Xon with Q-Learning and Two Double Q-Learning Variants
    Schilperoort, Jits
    Mak, Ivar
    Drugan, Madalina M.
    Wiering, Marco A.
    2018 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI), 2018, : 1151 - 1158
  • [26] Resource Allocation in Wireless Powered IoT Networks
    Liu, Xiaolan
    Qin, Zhijin
    Gao, Yue
    McCann, Julie A.
    IEEE INTERNET OF THINGS JOURNAL, 2019, 6 (03) : 4935 - 4945
  • [27] On the Estimation Bias in Double Q-Learning
    Ren, Zhizhou
    Zhu, Guangxiang
    Hu, Hao
    Han, Beining
    Chen, Jianglun
    Zhang, Chongjie
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [28] Work-in-Progress: Q-Learning Based Routing for Transiently Powered Wireless Sensor Network
    Jia, Zhenge
    Wu, Yawen
    Hu, Jingtong
    INTERNATIONAL CONFERENCE ON COMPILERS, ARCHITECTURE, AND SYNTHESIS FOR EMBEDDED SYSTEMS (CODES+ISSS), 2019
  • [29] Q-Learning NOMA Random Access for IoT-Satellite Terrestrial Relay Networks
    Tubiana, Douglas Alisson
    Farhat, Jamil
    Brante, Glauber
    Souza, Richard Demo
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2022, 11 (08) : 1619 - 1623
  • [30] CLIQUE: Role-Free Clustering with Q-Learning for Wireless Sensor Networks
    Foerster, Anna
    Murphy, Amy L.
    2009 29TH IEEE INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS, 2009, : 441+