Joint routing and computation offloading based deep reinforcement learning for Flying Ad hoc Networks

Citations: 0
|
Authors
Lin, Na [1 ]
Huang, Jinjiao [1 ]
Hawbani, Ammar [1 ]
Zhao, Liang [1 ]
Tang, Hailun [1 ]
Guan, Yunchong [1 ]
Sun, Yunhe [1 ]
Affiliations
[1] Shenyang Aerosp Univ, Sch Comp Sci, Shenyang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Unmanned Aerial Vehicles (UAVs); Computation offloading; Routing; Flying Ad-hoc Networks (FANETs); RESOURCE-ALLOCATION; UAV; OPTIMIZATION; DESIGN;
DOI
10.1016/j.comnet.2024.110514
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Flying ad-hoc networks (FANETs) consisting of multiple Unmanned Aerial Vehicles (UAVs) are widely used due to their flexibility and low cost. In scenarios such as crowdsensing and data collection, data collected by UAVs are transmitted to base stations for processing and then sent to data centers. However, the deployment of base stations is costly and inflexible. To address this issue, this paper introduces a position-based Computing First Routing (CFR) protocol designed for efficient task transmission and computation offloading in FANETs. This protocol facilitates task processing during data transfer and ensures the delivery of fully processed results to the data center. Considering the dynamically changing topology of FANETs and the uneven distribution of the UAVs' computation power, deep reinforcement learning is used to make multi-objective decisions based on the Q-values computed by the model. FANETs are centerless clusters, and two-hop neighbor tables containing position and computing-power information are used to make less costly decisions. Simulation experiments demonstrate that CFR outperforms other benchmark schemes, with an approximately 6% higher packet delivery rate, an approximately 21% reduction in end-to-end delay, and about a 34% decrease in total cost. Furthermore, it effectively ensures the completion of task offloading before reaching the destination node. This is achieved through a hierarchical reward function that accounts for dynamic changes in delay and energy consumption, as well as the injection of neighbor computing-power information into the two-hop neighbor table.
Pages: 12
Related Papers
50 records in total
  • [21] A Communication Model based Offloading Decision for Flying Ad-hoc Networks
    Min, Hong
    Jung, Jinman
    Kim, Bongjae
    Heo, Junyoung
    PROCEEDINGS OF THE 2018 CONFERENCE ON RESEARCH IN ADAPTIVE AND CONVERGENT SYSTEMS (RACS 2018), 2018, : 134 - 135
  • [22] Reinforcement Learning Based Mobility Adaptive Routing for Vehicular Ad-Hoc Networks
    Wu, Jinqiao
    Fang, Min
    Li, Xiao
    WIRELESS PERSONAL COMMUNICATIONS, 2018, 101 (04) : 2143 - 2171
  • [23] An Intersection-Based QoS Routing for Vehicular Ad Hoc Networks With Reinforcement Learning
    Rui, Lanlan
    Yan, Zhibo
    Tan, Zuoyan
    Gao, Zhipeng
    Yang, Yang
    Chen, Xingyu
    Liu, Huiyong
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (09) : 9068 - 9083
  • [25] A Survey of Reinforcement Learning Based Routing Protocols for Mobile Ad-Hoc Networks
    Chettibi, Saloua
    Chikhi, Salim
    RECENT TRENDS IN WIRELESS AND MOBILE NETWORKS, 2011, 162 : 1 - 13
  • [26] A Reinforcement Learning-based Routing Scheme for Cognitive Radio Ad Hoc Networks
    Al-Rawi, Hasan A. A.
    Yau, Kok-Lim Alvin
    Mohamad, Hafizal
    Ramli, Nordin
    Hashim, Wahidah
    2014 7TH IFIP WIRELESS AND MOBILE NETWORKING CONFERENCE (WMNC), 2014,
  • [27] Deep Reinforcement Learning Based Computation Offloading in SWIPT-assisted MEC Networks
    Wan, Changwei
    Guo, Songtao
    Yang, Yuanyuan
    2022 31ST INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS AND NETWORKS (ICCCN 2022), 2022,
  • [28] Flying Ad-Hoc Network Covert Communications with Deep Reinforcement Learning
    Li, Zonglin
    Wang, Jingjing
    Chen, Jianrui
    Fang, Zhengru
    Ren, Yong
    IEEE WIRELESS COMMUNICATIONS, 2024, 31 (05) : 117 - 125
  • [29] Deep Reinforcement Learning for Computation Offloading and Caching in Fog-Based Vehicular Networks
    Lan, Dapeng
    Taherkordi, Amir
    Eliassen, Frank
    Liu, Lei
    2020 IEEE 17TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SMART SYSTEMS (MASS 2020), 2020, : 622 - 630
  • [30] A Q-learning-based smart clustering routing method in flying Ad Hoc networks
    Hosseinzadeh, Mehdi
    Tanveer, Jawad
    Rahmani, Amir Masoud
    Aurangzeb, Khursheed
    Yousefpoor, Efat
    Yousefpoor, Mohammad Sadegh
    Darwesh, Aso
    Lee, Sang-Woong
    Fazlali, Mahmood
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2024, 36 (01)