Onboard Double Q-Learning for Airborne Data Capture in Wireless Powered IoT Networks

Cited by: 13
|
Authors
Li, Kai
Ni, Wei
Wei, Bo
Tovar, Eduardo
Source
Li, Kai (kai@isep.ipp.pt) | 2020 / Institute of Electrical and Electronics Engineers Inc., United States / Issue 02
Keywords
Data acquisition; Packet loss; Fading channels; Unmanned aerial vehicles (UAV); Antennas; Intelligent systems; Energy transfer; Scheduling algorithms; Learning algorithms
DOI
10.1109/LNET.2020.2989130
Abstract
This letter studies the use of Unmanned Aerial Vehicles (UAVs) in Internet-of-Things (IoT) networks, where a UAV with microwave power transfer (MPT) capability is employed to hover over the area of interest, charging IoT nodes remotely and collecting their data. Scheduling MPT and data transmission is critical to reducing the data packet loss caused by buffer overflows and channel fading. In practice, prior knowledge of the battery levels and data queue lengths of the IoT nodes is not available at the UAV. A new onboard double Q-learning scheduling algorithm is proposed to optimally select the IoT node to be interrogated for data collection and MPT along the flight trajectory of the UAV, thereby asymptotically minimizing the packet loss of the IoT network. Simulations confirm the superiority of our algorithm over Q-learning-based alternatives in terms of packet loss and learning efficiency/speed. © 2019 IEEE.
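The abstract describes an onboard double Q-learning scheduler that picks which IoT node the UAV should interrogate for data collection and MPT. Below is a minimal, hypothetical Python sketch of the double Q-learning selection/update rule such a scheduler builds on; the node count, state encoding, reward signal, and hyperparameters are illustrative assumptions, not the paper's implementation.

# Minimal sketch of double Q-learning for UAV-side node scheduling.
# All names and constants (NUM_NODES, ALPHA, GAMMA, EPS, the state/reward
# definitions) are illustrative assumptions, not the authors' code.
import random
from collections import defaultdict

NUM_NODES = 5              # hypothetical number of IoT nodes the UAV can serve
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Two independent Q-tables: one selects the greedy action, the other evaluates
# it, which reduces the overestimation bias of single-table Q-learning.
Q_a = defaultdict(lambda: [0.0] * NUM_NODES)
Q_b = defaultdict(lambda: [0.0] * NUM_NODES)

def select_node(state):
    """Epsilon-greedy selection over the sum of both Q-tables."""
    if random.random() < EPS:
        return random.randrange(NUM_NODES)
    combined = [Q_a[state][a] + Q_b[state][a] for a in range(NUM_NODES)]
    return max(range(NUM_NODES), key=lambda a: combined[a])

def update(state, action, reward, next_state):
    """Double Q-learning update: randomly pick one table to update,
    using the other table to evaluate the bootstrapped action."""
    if random.random() < 0.5:
        best = max(range(NUM_NODES), key=lambda a: Q_a[next_state][a])
        target = reward + GAMMA * Q_b[next_state][best]
        Q_a[state][action] += ALPHA * (target - Q_a[state][action])
    else:
        best = max(range(NUM_NODES), key=lambda a: Q_b[next_state][a])
        target = reward + GAMMA * Q_a[next_state][best]
        Q_b[state][action] += ALPHA * (target - Q_b[state][action])

Decoupling action selection from action evaluation across the two tables is what mitigates the overestimation bias of standard Q-learning; per the abstract, the paper applies this kind of update onboard the UAV to choose nodes along its flight trajectory so that packet loss is asymptotically minimized.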
Pages: 71 - 75
Related Papers
50 records in total
  • [31] Exploiting Q-learning in Extending the Network Lifetime of Wireless Sensor Networks with Holes
    Khanh Le
    Nguyen Thanh Hung
    Kien Nguyen
    Phi Le Nguyen
    2019 IEEE 25TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2019, : 602 - 609
  • [32] Q-learning Based Network Selection for WCDMA/WLAN Heterogeneous Wireless Networks
    Xu, Yubin
    Chen, Jiamei
    Ma, Lin
    Lang, Gaiping
    2014 IEEE 79TH VEHICULAR TECHNOLOGY CONFERENCE (VTC-SPRING), 2014,
  • [33] Deep Q-learning based resource allocation in industrial wireless networks for URLLC
    Bhardwaj, Sanjay
    Ginanjar, Rizki Rivai
    Kim, Dong-Seong
    IET COMMUNICATIONS, 2020, 14 (06) : 1022 - 1027
  • [34] Hierarchical Deep Q-Learning Based Handover in Wireless Networks with Dual Connectivity
    Iturria-Rivera, Pedro Enrique
    Elsayed, Medhat
    Bavand, Majid
    Gaigalas, Raimundas
    Furr, Steve
    Erol-Kantarci, Melike
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 6553 - 6558
  • [35] A Deep Reinforcement Learning Approach for Multi-UAV-Assisted Data Collection in Wireless Powered IoT networks
    Li, Zhiming
    Liu, Juan
    Xie, Lingfu
    Wang, Xijun
    Jin, Ming
    2022 14TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING, WCSP, 2022, : 44 - 49
  • [36] Q-learning based routing for in-network aggregation in wireless sensor networks
    Maivizhi, Radhakrishnan
    Yogesh, Palanichamy
    WIRELESS NETWORKS, 2021, 27 (03) : 2231 - 2250
  • [37] Deep Cross-Check Q-Learning for Jamming Mitigation in Wireless Networks
    Elleuch, Ibrahim
    Pourranjbar, Ali
    Kaddoum, Georges
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2024, 13 (05) : 1448 - 1452
  • [38] Q-learning Enabled Intelligent Energy Attack in Sustainable Wireless Communication Networks
    Li, Long
    Luo, Yu
    Pu, Lina
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,
  • [39] Relay Selection for Wireless Cooperative Networks using Adaptive Q-learning Approach
    Yang, Ke
    Zhu, Shengxiang
    Dan, Zhenlei
    Tang, Xiaolan
    Wu, Xiaohuan
    Ouyang, Jian
    2019 CROSS STRAIT QUAD-REGIONAL RADIO SCIENCE AND WIRELESS TECHNOLOGY CONFERENCE (CSQRWC), 2019,