Deep Reinforcement Learning for Energy-Efficient Data Dissemination Through UAV Networks

Cited: 0
Authors
Ali, Abubakar S. [1 ]
Al-Habob, Ahmed A. [2 ]
Naser, Shimaa [1 ]
Bariah, Lina [3 ]
Dobre, Octavia A. [2 ]
Muhaidat, Sami [1 ,4 ]
Affiliations
[1] Khalifa Univ, KU 6G Res Ctr, Dept Comp & Informat Engn, Abu Dhabi, U Arab Emirates
[2] Mem Univ, Dept Elect & Comp Engn, St John's, NF A1C 5S7, Canada
[3] Technol Innovat Inst, Abu Dhabi, U Arab Emirates
[4] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
Keywords
Autonomous aerial vehicles; Internet of Things; Data dissemination; Optimization; Energy consumption; Heuristic algorithms; Energy efficiency; deep learning; Internet-of-Things (IoT); reinforcement learning (RL); unmanned aerial vehicle (UAV); SENSOR NETWORKS; MANAGEMENT; INTERNET
DOI
10.1109/OJCOMS.2024.3398718
CLC classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology]
Discipline code
0808; 0809
Abstract
The rise of the Internet of Things (IoT), marked by unprecedented growth in connected devices, has created an insatiable demand for additional computational and communication resources. Integrating unmanned aerial vehicles (UAVs) into IoT ecosystems is a promising way to meet this demand, offering extended network coverage, agile deployment, and efficient data gathering from geographically challenging locations. Despite these benefits, UAV technology faces significant challenges, including limited energy resources, the need to adapt to dynamic environments, and the requirement for autonomous operation to fulfill the evolving demands of IoT networks. In light of this, we introduce a UAV-assisted data dissemination framework that minimizes the total energy expenditure of both the UAV and all spatially distributed IoT devices. The framework addresses three interconnected subproblems: device classification, device association, and path planning. For device classification, we employ two types of deep reinforcement learning (DRL) agents, Double Deep Q-Network (DDQN) and Proximal Policy Optimization (PPO), to classify devices into two tiers. For device association, we use a nearest-neighbor heuristic to associate each Tier 2 device with a Tier 1 device. For path planning, we apply the Lin-Kernighan heuristic to plan the UAV's path among the Tier 1 devices. We compare our method with three baseline approaches and show through simulation results that it significantly reduces energy consumption and yields a near-optimal solution in a fraction of the time required by brute-force search and ant colony heuristics.
Consequently, our framework presents an efficient and practical alternative for energy-efficient data dissemination in UAV-assisted IoT networks.
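The two heuristic stages summarized in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: the coordinates and device names are invented, tier labels are assumed to come from the DDQN/PPO agents, and a simple nearest-neighbor tour stands in for the Lin-Kernighan heuristic actually used for path planning.

```python
import math

# Hypothetical device positions (tiers assumed already assigned by the DRL agents).
tier1 = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (4.0, 3.0)}
tier2 = {"d1": (0.5, 0.2), "d2": (3.8, 0.4), "d3": (4.1, 2.5)}

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Device association: attach each Tier 2 device to its nearest Tier 1 device,
# mirroring the nearest-neighbor heuristic described in the abstract.
association = {
    d: min(tier1, key=lambda t: dist(tier2[d], tier1[t])) for d in tier2
}

def nn_tour(points, start):
    """Greedy nearest-neighbor tour over `points`, starting at `start`.
    A crude stand-in for the Lin-Kernighan tour improvement used in the paper."""
    unvisited = set(points) - {start}
    tour, cur = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda t: dist(points[cur], points[t]))
        tour.append(nxt)
        unvisited.remove(nxt)
        cur = nxt
    return tour

tour = nn_tour(tier1, "A")
print(association)  # {'d1': 'A', 'd2': 'B', 'd3': 'C'}
print(tour)         # ['A', 'B', 'C']
```

In the paper's pipeline, the UAV would then visit the Tier 1 devices along the planned tour, with each Tier 1 device relaying data to its associated Tier 2 devices; Lin-Kernighan would further refine the tour via edge exchanges, which this sketch omits.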
Pages: 5567-5583
Page count: 17
Related papers
50 records
  • [21] Energy-Efficient Data Dissemination Using a UAV: An Ant Colony Approach
    Al-Habob, Ahmed A.
    Dobre, Octavia A.
    Muhaidat, Sami
    Vincent Poor, H.
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2021, 10 (01) : 16 - 20
  • [22] Towards an energy-efficient Data Center Network based on deep reinforcement learning
    Wang, Yang
    Li, Yutong
    Wang, Ting
    Liu, Gang
    COMPUTER NETWORKS, 2022, 210
  • [24] Deep Reinforcement Learning-Based UAV Path Planning for Energy-Efficient Multitier Cooperative Computing in Wireless Sensor Networks
    Guo, Zhihui
    Chen, Hongbin
    Li, Shichao
    JOURNAL OF SENSORS, 2023, 2023
  • [25] Energy-Efficient Power Allocation and User Association in Heterogeneous Networks with Deep Reinforcement Learning
    Hsieh, Chi-Kai
    Chan, Kun-Lin
    Chien, Feng-Tsun
    APPLIED SCIENCES-BASEL, 2021, 11 (09):
  • [26] Deep Reinforcement Learning for Energy-Efficient Task Offloading in Cooperative Vehicular Edge Networks
    Agbaje, Paul
    Nwafor, Ebelechukwu
    Olufowobi, Habeeb
    2023 IEEE 21ST INTERNATIONAL CONFERENCE ON INDUSTRIAL INFORMATICS, INDIN, 2023,
  • [27] Deep Reinforcement Learning for Energy-Efficient Beamforming Design in Cell-Free Networks
    Li, Weilai
    Ni, Wanli
    Tian, Hui
    Hua, Meihui
    2021 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE WORKSHOPS (WCNCW), 2021,
  • [28] Secure and Energy-Efficient Communication for Internet of Drones Networks: A Deep Reinforcement Learning Approach
    Aboueleneen, Noor
    Alwarafy, Abdulmalik
    Abdallah, Mohamed
    2023 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING, IWCMC, 2023, : 818 - 823
  • [29] An energy-efficient data dissemination protocol for wireless sensor networks
    Ammari, HM
    Das, SK
    FOURTH ANNUAL IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS, PROCEEDINGS, 2006, : 357 - +
  • [30] Trace Pheromone-Based Energy-Efficient UAV Dynamic Coverage Using Deep Reinforcement Learning
    Cheng, Xu
    Jiang, Rong
    Sang, Hongrui
    Li, Gang
    He, Bin
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2024, 10 (03) : 1063 - 1074