Vehicular edge cloud computing content caching optimization solution based on content prediction and deep reinforcement learning

Cited by: 1
Authors
Zhu, Lin [1 ]
Li, Bingxian [1 ]
Tan, Long [1 ]
Affiliation
[1] Heilongjiang Univ, Sch Comp Sci & Technol, Harbin 150080, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Edge computing; Internet of vehicles; Deep reinforcement learning; Informer; Task caching; RESOURCE-ALLOCATION; FRAMEWORKS; INTERNET; SYSTEMS;
DOI
10.1016/j.adhoc.2024.103643
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
In conventional studies on vehicular edge computing, researchers frequently overlook the high-speed mobility of vehicles and the dynamic nature of the vehicular edge environment. Moreover, when deep reinforcement learning is employed to address vehicular edge challenges, insufficient attention is given to the risk of the algorithm converging to a local optimum. This paper presents a content caching solution tailored for vehicular edge cloud computing that integrates content prediction and deep reinforcement learning. To account for the swift mobility of vehicles and the ever-changing vehicular edge environment, the study proposes a content prediction model based on Informer. Leveraging this prediction model, the system anticipates the dynamics of the vehicular edge environment and uses the forecasts to inform the caching of vehicle task content. Acknowledging the different time scales on which policy decisions such as content updating, vehicle scheduling, and bandwidth allocation operate, the paper adopts a dual time-scale Markov modeling approach. Furthermore, to address the local-optimality issue inherent in the A3C algorithm, an enhanced A3C algorithm is introduced that incorporates an epsilon-greedy strategy to promote exploration. Recognizing the limitations of a fixed exploration rate epsilon, a dynamic baseline mechanism is proposed for updating epsilon adaptively. Experimental findings demonstrate that, compared with alternative content caching approaches, the proposed vehicular edge computing content caching solution substantially reduces content access costs. To support research in this area, the source code and pre-trained models have been publicly released at https://github.com/JYAyyyyyy/Informer.git.
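The abstract states that the enhanced A3C algorithm adds epsilon-greedy exploration and updates epsilon via a dynamic baseline, but the record does not give the exact update rule. The Python sketch below illustrates one plausible form of such a mechanism: actions are taken uniformly at random with probability epsilon, and epsilon is raised or lowered depending on whether the latest episode reward beats an exponential-moving-average baseline. All names and constants (`beta`, `delta`, `eps_min`, `eps_max`) are hypothetical, not taken from the paper.

```python
import random

def select_action(policy_probs, epsilon):
    """Epsilon-greedy layer over an actor's policy: with probability
    epsilon pick a uniformly random action to force exploration,
    otherwise take the action the policy currently favors."""
    n_actions = len(policy_probs)
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: policy_probs[a])

def update_epsilon(epsilon, episode_reward, baseline,
                   beta=0.9, delta=0.01, eps_min=0.01, eps_max=0.5):
    """Dynamic-baseline update (illustrative): track an exponential
    moving average of episode rewards; if the latest reward beats the
    baseline, shrink epsilon (exploit more), otherwise grow it
    (explore more). Returns the clipped epsilon and updated baseline."""
    if episode_reward > baseline:
        epsilon = max(eps_min, epsilon - delta)
    else:
        epsilon = min(eps_max, epsilon + delta)
    baseline = beta * baseline + (1 - beta) * episode_reward
    return epsilon, baseline
```

In an A3C training loop, each worker would call `select_action` at every step and `update_epsilon` once per episode, so exploration decays only while performance is actually improving rather than on a fixed schedule.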
Pages: 13