Task Offloading and Trajectory Optimization in UAV Networks: A Deep Reinforcement Learning Method Based on SAC and A-Star

Cited by: 0
Authors
Liu, Jianhua [1 ]
Xie, Peng [1 ]
Liu, Jiajia [1 ]
Tu, Xiaoguang [1 ]
Affiliations
[1] Civil Aviat Flight Univ China, Inst Elect & Elect Engn, Deyang 618307, Peoples R China
Source
COMPUTER MODELING IN ENGINEERING & SCIENCES
Funding
China Postdoctoral Science Foundation
Keywords
Mobile edge computing; SAC; communication security; A-Star; UAV; RESOURCE
DOI
10.32604/cmes.2024.054002
Chinese Library Classification (CLC)
T [Industrial Technology]
Subject classification code
08
Abstract
In mobile edge computing, unmanned aerial vehicles (UAVs) equipped with computing servers have emerged as a promising solution due to their exceptional attributes of high mobility, flexibility, rapid deployment, and terrain agnosticism. These attributes enable UAVs to reach designated areas swiftly, thereby addressing temporary computing demands in scenarios where ground-based servers are overloaded or unavailable. However, the inherent broadcast nature of the line-of-sight transmission methods employed by UAVs renders them vulnerable to eavesdropping attacks. Meanwhile, real UAV operation areas often contain obstacles that affect flight safety, and collisions between UAVs may also occur. To solve these problems, we propose an innovative A*SAC deep reinforcement learning algorithm, which seamlessly integrates the benefits of the Soft Actor-Critic (SAC) and A* (A-Star) algorithms. This algorithm jointly optimizes the hovering position and task offloading proportion of the UAV through a task offloading function. Furthermore, our algorithm incorporates a path-planning function that identifies the most energy-efficient route for the UAV to reach its optimal hovering point. This approach not only reduces the flight energy consumption of the UAV but also lowers overall energy consumption, thereby optimizing system-level energy efficiency. Extensive simulation results demonstrate that, compared to other algorithms, our approach achieves superior system benefits. Specifically, it exhibits an average improvement of 13.18% across different computing task sizes, a 25.61% higher benefit on average across different powers of the electromagnetic interference emitted into the UAVs by different auxiliary UAVs, and a 35.78% higher benefit on average across different maximum computing frequencies of the auxiliary UAVs. As for path planning, the simulation results indicate that our algorithm is capable of determining the optimal collision-avoidance path for each auxiliary UAV, enabling them to safely reach their designated endpoints in diverse obstacle-ridden environments.
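The following is a minimal, illustrative sketch (not the authors' implementation) of the path-planning idea described above: an A* (A-Star) search over a discretized obstacle map that routes an auxiliary UAV to its hovering point, which in the paper is chosen by the SAC policy. The grid, the unit move cost standing in for per-step flight energy, and all function and variable names are assumptions made for illustration.

```python
# Illustrative sketch only: grid-based A* of the kind the paper's path-planning
# function could use to route an auxiliary UAV to the SAC-selected hovering point.
# The grid layout, cost model, and all names are assumptions, not the authors' code.
import heapq

def a_star(grid, start, goal):
    """Return a minimum-cost obstacle-free path on a 4-connected grid.

    grid[r][c] == 1 marks an obstacle cell; 0 is free space.
    start and goal are (row, col) tuples; returns a list of cells or None.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic, admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]            # entries are (f = g + h, g, cell)
    came_from, g_cost = {}, {start: 0}

    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:                          # reconstruct the path back to start
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1                        # assumed unit energy cost per grid move
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt], came_from[nxt] = ng, cell
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                                   # no collision-free route exists

# Hypothetical usage: route a UAV from (0, 0) to a hovering cell (3, 3) that a
# trained SAC actor is assumed to have selected.
obstacle_map = [[0, 0, 0, 0],
                [0, 1, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]]
print(a_star(obstacle_map, (0, 0), (3, 3)))
```

In the paper's scheme the hovering position and task offloading proportion are continuous actions produced by the SAC policy; the planner above only illustrates how an energy-weighted A* search could then supply the collision-free route to that position.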
Pages: 1243-1273
Number of pages: 31
Related papers
50 records in total
  • [1] Deep Reinforcement Learning for Task Offloading in UAV-Aided Smart Farm Networks
    Nguyen, Anne Catherine
    Pamuklu, Turgay
    Syed, Aisha
    Kennedy, W. Sean
    Erol-Kantarci, Melike
    2022 IEEE FUTURE NETWORKS WORLD FORUM, FNWF, 2022, : 270 - 275
  • [2] Deep Reinforcement Learning Assisted UAV Trajectory and Resource Optimization for NOMA Networks
    Chen, Peixin
    Zhao, Jian
    Shen, Furao
    2022 14TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING, WCSP, 2022, : 933 - 938
  • [3] A Deep Reinforcement Learning Based UAV Trajectory Planning Method For Integrated Sensing And Communications Networks
    Lin, Heyun
    Zhang, Zhihai
    Wei, Longkun
    Zhou, Zihao
    Zheng, Tian
2023 IEEE 98TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-FALL, 2023
  • [4] Computing Over the Sky: Joint UAV Trajectory and Task Offloading Scheme Based on Optimization-Embedding Multi-Agent Deep Reinforcement Learning
    Li, Xuanheng
    Du, Xinyang
    Zhao, Nan
    Wang, Xianbin
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2024, 72 (03) : 1355 - 1369
  • [5] Research on task offloading optimization strategies for vehicular networks based on game theory and deep reinforcement learning
    Wang, Lei
    Zhou, Wenjiang
    Xu, Haitao
    Li, Liang
    Cai, Lei
    Zhou, Xianwei
    FRONTIERS IN PHYSICS, 2023, 11
  • [6] Deep Reinforcement Learning Based 3D-Trajectory Design and Task Offloading in UAV-Enabled MEC System
    Liu, Chuanjie
    Zhong, Yalin
    Wu, Ruolin
    Ren, Siyu
    Du, Shuang
    Guo, Bing
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (02) : 3185 - 3195
  • [7] Task Offloading and Trajectory Control for UAV-Assisted Mobile Edge Computing Using Deep Reinforcement Learning
    Zhang, Lu
    Zhang, Zi-Yan
    Min, Luo
    Tang, Chao
    Zhang, Hong-Ying
    Wang, Ya-Hong
    Cai, Peng
    IEEE ACCESS, 2021, 9 : 53708 - 53719
  • [8] Task Offloading Optimization in Mobile Edge Computing based on Deep Reinforcement Learning
    Silva, Carlos
    Magaia, Naercio
    Grilo, Antonio
    PROCEEDINGS OF THE INT'L ACM CONFERENCE ON MODELING, ANALYSIS AND SIMULATION OF WIRELESS AND MOBILE SYSTEMS, MSWIM 2023, 2023, : 109 - 118
  • [9] Collaborative Task Offloading Based on Deep Reinforcement Learning in Heterogeneous Edge Networks
    Du, Yupeng
    Huang, Zhenglei
    Yang, Shujie
    Xiao, Han
    20TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE, IWCMC 2024, 2024, : 375 - 380
  • [10] Asynchronous Federated Deep-Reinforcement-Learning-Based Dependency Task Offloading for UAV-Assisted Vehicular Networks
    Shen, Si
    Shen, Guojiang
    Dai, Zhehao
    Zhang, Kaiyu
    Kong, Xiangjie
    Li, Jianxin
IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (19): 31561 - 31574