Dynamic deployment method based on double deep Q-network in UAV-assisted MEC systems

Cited by: 0
Authors
Suqin Zhang
Lin Zhang
Fei Xu
Song Cheng
Weiya Su
Sen Wang
Institutions
[1] Xi’an Technological University,School of Basic
[2] Xi’an Technological University,School of Ordnance Science and Technology
[3] Xi’an Technological University,School of Computer Science and Engineering
Keywords
Dynamic deployment; Unmanned aerial vehicle (UAV); Mobile edge computing (MEC); Double deep Q-network
DOI: not available
Abstract
The unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) system leverages the high maneuverability of UAVs to provide efficient computing services to terminals. A dynamic deployment algorithm based on the double deep Q-network (DDQN) is proposed to address the energy limitation and obstacle-avoidance issues that arise when a UAV provides edge services to terminals. First, the UAV's energy consumption and the fairness of the terminals' geographic locations are jointly optimized in a scenario with multiple ground obstacles and multiple terminals, while ensuring the UAV avoids the obstacles. Second, a double deep Q-network is introduced to counter the slow convergence and the risk of falling into local optima during training on the optimization problem, and a pseudo-count exploration strategy is incorporated into the learning process. Finally, experimental results show that the improved DDQN algorithm converges faster and achieves a higher average system reward. In terms of the fairness of the terminals' geographic locations, the improved DDQN algorithm outperforms the Q-learning, DQN, and DDQN algorithms by 50%, 20%, and 15.38%, respectively, and the stability of the improved algorithm is also validated.
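The abstract's two core mechanisms — the double-DQN update (the online network selects the next action, the target network evaluates it) and a pseudo-count exploration bonus — can be sketched as follows. This is a minimal tabular illustration under assumed settings (grid size, discount factor, bonus scale, and all names are illustrative), not the paper's implementation:

```python
import math
import random

random.seed(0)

# Toy discretization (assumed): the UAV occupies one of N_STATES grid cells
# and chooses among N_ACTIONS movement actions.
N_STATES, N_ACTIONS = 5, 4
GAMMA = 0.9   # discount factor (assumed)
BETA = 0.5    # exploration-bonus scale (assumed)

# Tabular stand-ins for the online and target Q-networks.
q_online = [[random.uniform(-1, 1) for _ in range(N_ACTIONS)] for _ in range(N_STATES)]
q_target = [row[:] for row in q_online]

visit_counts = [0] * N_STATES  # pseudo-counts N(s)

def exploration_bonus(state):
    """Pseudo-count bonus beta / sqrt(N(s)): large for rarely visited states,
    shrinking as the state is revisited, which encourages exploration."""
    visit_counts[state] += 1
    return BETA / math.sqrt(visit_counts[state])

def ddqn_target(reward, next_state, done):
    """Double-DQN target: the online net picks the greedy action, the target
    net scores it. Decoupling selection from evaluation reduces the
    overestimation bias of plain DQN."""
    if done:
        return reward
    best_a = max(range(N_ACTIONS), key=lambda a: q_online[next_state][a])
    return reward + GAMMA * q_target[next_state][best_a]
```

In a full training loop, `exploration_bonus` would be added to the environment reward before computing `ddqn_target`, and `q_target` would be periodically synchronized with `q_online`.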