Deep Reinforcement Learning for Energy-Efficient Data Dissemination Through UAV Networks

Cited: 0
Authors
Ali, Abubakar S. [1 ]
Al-Habob, Ahmed A. [2 ]
Naser, Shimaa [1 ]
Bariah, Lina [3 ]
Dobre, Octavia A. [2 ]
Muhaidat, Sami [1 ,4 ]
Affiliations
[1] Khalifa Univ, KU 6G Res Ctr, Dept Comp & Informat Engn, Abu Dhabi, U Arab Emirates
[2] Mem Univ, Dept Elect & Comp Engn, St John's, NL A1C 5S7, Canada
[3] Technol Innovat Inst, Abu Dhabi, U Arab Emirates
[4] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
Keywords
Autonomous aerial vehicles; Internet of Things; Data dissemination; Optimization; Energy consumption; Heuristic algorithms; Energy efficiency; deep learning; Internet-of-Things (IoT); reinforcement learning (RL); unmanned aerial vehicle (UAV); SENSOR NETWORKS; MANAGEMENT; INTERNET;
DOI
10.1109/OJCOMS.2024.3398718
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline code
0808; 0809;
Abstract
The rise of the Internet of Things (IoT), marked by unprecedented growth in connected devices, has created an insatiable demand for additional computational and communication resources. Integrating unmanned aerial vehicles (UAVs) into IoT ecosystems is a promising way to meet this demand, offering extended network coverage, agile deployment, and efficient data gathering from geographically challenging locales. Despite these benefits, UAV technology faces significant challenges, including limited energy resources, the need to adapt to dynamic environments, and the requirement for autonomous operation to fulfill the evolving demands of IoT networks. In light of this, we introduce a UAV-assisted data dissemination framework that minimizes the total energy expenditure of both the UAV and the spatially distributed IoT devices. The framework addresses three interconnected subproblems: device classification, device association, and path planning. For device classification, we employ two distinct types of deep reinforcement learning (DRL) agents, Double Deep Q-Network (DDQN) and Proximal Policy Optimization (PPO), to classify devices into two tiers. For device association, we propose a nearest-neighbor heuristic that assigns each Tier 2 device to a Tier 1 device. For path planning, we apply the Lin-Kernighan heuristic to plan the UAV's path among the Tier 1 devices. We compare our method with three baseline approaches and demonstrate through simulation results that our approach significantly reduces energy consumption and offers a near-optimal solution in a fraction of the time required by brute-force search and ant colony heuristics.
Consequently, our framework presents an efficient and practical alternative for energy-efficient data dissemination in UAV-assisted IoT networks.
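The device-association and path-planning stages summarized in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the coordinates are invented, `nearest_neighbor_association` and `plan_tour` are hypothetical names, and the tour is refined with a 2-opt pass, a simpler relative of the Lin-Kernighan heuristic named in the paper.

```python
import math

def nearest_neighbor_association(tier1, tier2):
    """Assign each Tier 2 device to its nearest Tier 1 device (Euclidean)."""
    assoc = {}
    for j, dev in enumerate(tier2):
        assoc[j] = min(range(len(tier1)),
                       key=lambda i: math.dist(tier1[i], dev))
    return assoc

def tour_length(points, order):
    """Total length of the closed tour visiting `points` in `order`."""
    return sum(math.dist(points[order[k]], points[order[(k + 1) % len(order)]])
               for k in range(len(order)))

def plan_tour(points):
    """Build a nearest-neighbor tour, then improve it with 2-opt moves
    (a simplified stand-in for Lin-Kernighan)."""
    n = len(points)
    unvisited = set(range(1, n))
    order = [0]
    while unvisited:  # greedy nearest-neighbor construction
        last = order[-1]
        nxt = min(unvisited, key=lambda i: math.dist(points[last], points[i]))
        unvisited.remove(nxt)
        order.append(nxt)
    improved = True
    while improved:   # 2-opt: reverse a segment whenever that shortens the tour
        improved = False
        for a in range(1, n - 1):
            for b in range(a + 1, n):
                cand = order[:a] + order[a:b + 1][::-1] + order[b + 1:]
                if tour_length(points, cand) < tour_length(points, order):
                    order = cand
                    improved = True
    return order
```

In this sketch the UAV would visit only the Tier 1 devices along the returned tour, while each Tier 1 device relays data to its associated Tier 2 devices, which is what makes the two-tier split energy-saving in the first place.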
Pages: 5567-5583
Page count: 17
Related papers
50 records in total
  • [41] Energy-Efficient UAV Crowdsensing with Multiple Charging Stations by Deep Learning
    Liu, Chi Harold
    Piao, Chengzhe
    Tang, Jian
    IEEE INFOCOM 2020 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS, 2020, : 199 - 208
  • [42] CORD: Energy-efficient reliable bulk data dissemination in sensor networks
    Huang, Leijun
    Setia, Sanjeev
    27TH IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (INFOCOM), VOLS 1-5, 2008, : 1247 - 1255
  • [43] Energy-Efficient Data Dissemination Using Beamforming in Wireless Sensor Networks
    Feng, Jing
    Lu, Yung-Hsiang
    Jung, Byunghoo
    Peroulis, Dimitrios
    Hu, Y. Charlie
    ACM TRANSACTIONS ON SENSOR NETWORKS, 2013, 9 (03)
  • [44] A robust and energy-efficient data dissemination framework for wireless sensor networks
    Liu, Wei
    Zhang, Yanchao
    Lou, Wenjing
    Fang, Yuguang
    WIRELESS NETWORKS, 2006, 12 (04) : 465 - 479
  • [45] Energy-efficient resource allocation over wireless communication systems through deep reinforcement learning
    Shukla, Kirti
    Kollu, Archana
    Panwar, Poonam
    Soni, Mukesh
    Jindal, Latika
    Patel, Hemlata
    Keshta, Ismail
    Maaliw, Renato R., III
    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, 2025, 38 (01)
  • [46] Footstep reward for energy-efficient quadruped gait generation and transition through deep reinforcement learning
    Sulpice, Lucas
    Owaki, Dai
    Hayashibe, Mitsuhiro
    ADVANCED ROBOTICS, 2025, 39 (01) : 71 - 78
  • [48] An Optimized Deep Learning Framework for Energy-Efficient Resource Allocation in UAV-Assisted Wireless Networks
    Tian, Yanan
    Khan, Adil
    Ahmad, Shabeer
    Mohsan, Syed Agha Hassnain
    Karim, Faten Khalid
    Hayat, Babar
    Mostafa, Samih M.
    IEEE ACCESS, 2025, 13 : 40632 - 40648
  • [49] Energy-Efficient Joint Task Assignment and Migration in Data Centers: A Deep Reinforcement Learning Approach
    Lou, Jiong
    Tang, Zhiqing
    Jia, Weijia
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2023, 20 (02): : 961 - 973
  • [50] Explore Deep Reinforcement Learning to Energy-Efficient Data Synchronism in 5G Self-Powered Sensor Networks
    Wu, Chunyi
    Zhao, Yanhua
    Xiao, Shan
    Gao, Chao
    IEEE SENSORS JOURNAL, 2023, 23 (18) : 20586 - 20595