Dynamic Task Offloading in MEC-Enabled IoT Networks: A Hybrid DDPG-D3QN Approach

Cited by: 10
Authors
Hu, Han [1 ,2 ]
Wu, Dingguo [1 ,2 ]
Zhou, Fuhui [3 ]
Jin, Shi [4 ]
Hu, Rose Qingyang [5 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Jiangsu Key Lab Wireless Commun, Nanjing 210000, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Jiangsu Key Lab Broadband Wireless Commun & Inter, Nanjing 210000, Peoples R China
[3] Nanjing Univ Aeronaut & Astronaut, Coll Elect & Informat Engn, Nanjing 210000, Peoples R China
[4] Southeast Univ, Natl Mobile Commun Res Lab, Nanjing, Peoples R China
[5] Utah State Univ, Dept Elect & Comp Engn, Logan, UT 84322 USA
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
Mobile edge computing (MEC); dynamic offloading; deep reinforcement learning; Internet of Things (IoT);
DOI
10.1109/GLOBECOM46510.2021.9685906
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Mobile edge computing (MEC) has recently emerged as an enabling technology to support computation-intensive and delay-critical applications for energy-constrained and computation-limited Internet of Things (IoT) devices. Due to time-varying channels and dynamic task patterns, making efficient and effective computation offloading decisions is challenging, especially in multi-server multi-user IoT networks, where the decisions involve both continuous and discrete actions. In this paper, we investigate computation task offloading in a dynamic environment and formulate a task offloading problem to minimize the average long-term service cost in terms of power consumption and buffering delay. To enhance the estimation of the long-term cost, we propose a deep reinforcement learning based algorithm, in which deep deterministic policy gradient (DDPG) and dueling double deep Q networks (D3QN) are invoked to handle the continuous and discrete action domains, respectively. Simulation results validate that the proposed DDPG-D3QN algorithm exhibits better stability and faster convergence than existing methods, and significantly reduces the average system service cost.
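The hybrid action structure described in the abstract — a D3QN head selecting the discrete offloading target while a DDPG actor outputs the continuous control (e.g., transmit power) — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the state dimension, number of servers, and linear function approximators are assumptions chosen for brevity; a real agent would use trained neural networks, replay buffers, and target networks.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 6   # e.g., task-queue lengths + channel gains (illustrative)
N_TARGETS = 3   # discrete choices: local execution + 2 edge servers (assumption)

# Dueling Q head (D3QN side), shown here as single linear layers for brevity.
W_v = rng.normal(scale=0.1, size=(STATE_DIM, 1))          # state-value stream V(s)
W_a = rng.normal(scale=0.1, size=(STATE_DIM, N_TARGETS))  # advantage stream A(s,a)

def dueling_q(state):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    v = state @ W_v                    # shape (1,)
    a = state @ W_a                    # shape (N_TARGETS,)
    return (v + a - a.mean()).ravel()  # broadcast to (N_TARGETS,)

# Deterministic actor (DDPG side) for the continuous action.
W_mu = rng.normal(scale=0.1, size=(STATE_DIM, 1))

def actor(state):
    """Map state to a power fraction in (0, 1) via sigmoid squashing."""
    return (1.0 / (1.0 + np.exp(-(state @ W_mu)))).item()

def hybrid_action(state, eps=0.1):
    """Discrete offloading target from the D3QN head (epsilon-greedy),
    continuous power level from the DDPG actor."""
    if rng.random() < eps:
        target = int(rng.integers(N_TARGETS))   # explore
    else:
        target = int(np.argmax(dueling_q(state)))  # exploit
    power = actor(state)
    return target, power

state = rng.normal(size=STATE_DIM)
target, power = hybrid_action(state, eps=0.0)
print(target, round(power, 3))
```

At each decision epoch the two heads share the same state observation, so the discrete target choice and the continuous power level stay consistent with one another; training would update the D3QN head with double-Q targets and the actor with the deterministic policy gradient.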
Pages: 6