QoE-Based Task Offloading With Deep Reinforcement Learning in Edge-Enabled Internet of Vehicles

Cited by: 85
Authors
He, Xiaoming [1 ]
Lu, Haodong [2 ]
Du, Miao [2 ]
Mao, Yingchi [1 ]
Wang, Kun [3 ]
Affiliations
[1] Hohai Univ, Coll Comp & Informat, Nanjing 210098, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Coll Internet Things, Nanjing 210003, Peoples R China
[3] Univ Calif Los Angeles, Dept Elect & Comp Engn, Los Angeles, CA 90095 USA
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Quality of experience; Servers; Training; Computational modeling; Energy consumption; Convergence; Internet of vehicles (IoV); edge; task offloading; deep deterministic policy gradients (DDPG); QoE; RESOURCE-ALLOCATION;
DOI
10.1109/TITS.2020.3016002
Chinese Library Classification (CLC)
TU [Building Science];
Subject Classification Code
0813;
Abstract
In the transportation industry, task offloading services in the edge-enabled Internet of Vehicles (IoV) are expected to provide vehicles with a better Quality of Experience (QoE). However, the varying states of diverse edge servers and vehicles, as well as the varying vehicular offloading modes, make task offloading a challenging service. Therefore, to enhance QoE satisfaction, we first introduce a novel QoE model. Specifically, the proposed QoE model is constrained by energy consumption and reflects that: 1) intelligent vehicles equipped with caching space and computing units may serve as carriers; 2) the diverse computational and caching capacities of edge servers can empower offloading; and 3) the unpredictable routes of vehicles and edge servers lead to diverse information transmission. We then propose an improved deep reinforcement learning (DRL) algorithm named PS-DDPG, which combines prioritized experience replay (PER) and stochastic weight averaging (SWA) with deep deterministic policy gradients (DDPG) to seek an optimal offloading mode while saving energy. Specifically, the PER scheme improves how effectively the experience replay buffer is utilized, thereby accelerating training, while the SWA scheme averages network weights, reducing noise during training and stabilizing the rewards. Extensive experiments confirm the superior stability and convergence of our PS-DDPG algorithm compared to existing work, and further indicate that the proposed algorithm improves the QoE value.
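As a minimal sketch of the two mechanisms named above, assuming a standard proportional PER buffer and a running weight average for SWA, the Python snippet below illustrates how such components could look. The names PrioritizedReplayBuffer and swa_update and the hyperparameters alpha and beta are hypothetical and not taken from the paper, and the DDPG actor-critic updates that PS-DDPG wraps around are omitted.

import numpy as np
from collections import namedtuple

Transition = namedtuple("Transition", "state action reward next_state done")

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (PER), minimal sketch."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity      # maximum number of stored transitions
        self.alpha = alpha            # how strongly priorities skew sampling
        self.buffer, self.priorities, self.pos = [], [], 0

    def push(self, transition, td_error=1.0):
        # Priority grows with the TD error, so "surprising" transitions are replayed more often.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        probs = np.asarray(self.priorities, dtype=np.float64)
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by non-uniform sampling.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.buffer[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors):
        for i, err in zip(idx, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha


def swa_update(avg_weights, new_weights, n_averaged):
    """Stochastic weight averaging: running mean over parameter snapshots."""
    return {k: avg_weights[k] + (new_weights[k] - avg_weights[k]) / (n_averaged + 1)
            for k in avg_weights}


if __name__ == "__main__":
    buf = PrioritizedReplayBuffer(capacity=1000)
    for step in range(100):
        buf.push(Transition(step, 0, 0.0, step + 1, False), td_error=np.random.rand())
    batch, idx, weights = buf.sample(batch_size=8)
    print(len(batch), weights.round(3))

In a DDPG-style training loop, swa_update would be applied to periodic snapshots of the actor and critic parameters, and update_priorities would be called with the TD errors of each sampled batch.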
Pages: 2252 - 2261
Page count: 10
Related Papers
50 records in total
  • [31] Joint Task Offloading Based on Distributed Deep Reinforcement Learning-Based Genetic Optimization Algorithm for Internet of Vehicles
    Jin, Hulin
    Kim, Yong-Guk
    Jin, Zhiran
    Fan, Chunyang
    Xu, Yonglong
    JOURNAL OF GRID COMPUTING, 2024, 22 (01)
  • [32] Joint Task Offloading Based on Distributed Deep Reinforcement Learning-Based Genetic Optimization Algorithm for Internet of Vehicles
    Hulin Jin
    Yong-Guk Kim
    Zhiran Jin
    Chunyang Fan
    Yonglong Xu
    Journal of Grid Computing, 2024, 22
  • [33] Research on Task Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing
    Lu H.
    Gu C.
    Luo F.
    Ding W.
    Yang T.
    Zheng S.
Science Press, 57: 1539 - 1554
  • [34] Collaborative Task Offloading Based on Deep Reinforcement Learning in Heterogeneous Edge Networks
    Du, Yupeng
    Huang, Zhenglei
    Yang, Shujie
    Xiao, Han
    20TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE, IWCMC 2024, 2024, : 375 - 380
  • [35] Task offloading of edge computing network based on Lyapunov and deep reinforcement learning
    Qiao, Xudong
    Zhou, Yongxin
    2024 9TH INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATION SYSTEMS, ICCCS 2024, 2024, : 1054 - 1059
  • [36] Task Offloading Optimization in Mobile Edge Computing based on Deep Reinforcement Learning
    Silva, Carlos
    Magaia, Naercio
    Grilo, Antonio
    PROCEEDINGS OF THE INT'L ACM CONFERENCE ON MODELING, ANALYSIS AND SIMULATION OF WIRELESS AND MOBILE SYSTEMS, MSWIM 2023, 2023, : 109 - 118
  • [37] Dynamic task offloading for Internet of Things in mobile edge computing via deep reinforcement learning
    Chen, Ying
    Gu, Wei
    Li, Kaixin
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, 2022
  • [38] Joint Caching and Computing Service Placement for Edge-Enabled IoT Based on Deep Reinforcement Learning
    Chen, Yan
    Sun, Yanjing
    Yang, Bin
    Taleb, Tarik
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (19) : 19501 - 19514
  • [39] A DQN-Based Frame Aggregation and Task Offloading Approach for Edge-Enabled IoMT
    Yuan, Xiaoming
    Zhang, Zedan
    Feng, Chujun
    Cui, Yejia
    Garg, Sahil
    Kaddoum, Georges
    Yu, Keping
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2023, 10 (03): : 1339 - 1351
  • [40] Deep Reinforcement Learning Based Computation Offloading in Fog Enabled Industrial Internet of Things
    Ren, Yijing
    Sun, Yaohua
    Peng, Mugen
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2021, 17 (07) : 4978 - 4987