Proactive Caching With Distributed Deep Reinforcement Learning in 6G Cloud-Edge Collaboration Computing

Cited by: 2
Authors
Wu, Changmao [1 ]
Xu, Zhengwei [2 ]
He, Xiaoming [3 ]
Lou, Qi [1 ]
Xia, Yuanyuan [1 ]
Huang, Shuman [2 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, Beijing 100190, Peoples R China
[2] Henan Normal Univ, Coll Comp & Informat Engn, Xinxiang 453007, Peoples R China
[3] Nanjing Univ Posts & Telecommun, Coll Internet Things, Nanjing 210049, Peoples R China
Keywords
Costs; Training; 6G mobile communication; Servers; Predictive models; Generative adversarial networks; Optimization; 6G; distributed edge computing; proactive caching; deep reinforcement learning; multi-agent learning architecture; BLOCKCHAIN;
DOI
10.1109/TPDS.2024.3406027
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202
Abstract
Proactive caching in 6G cloud-edge collaboration scenarios, which intelligently and periodically updates the cached contents, can both alleviate traffic congestion on the backhaul and edge cooperative links and bring multimedia services closer to mobile users. To further improve the network performance of the 6G cloud-edge, we consider a multi-objective joint optimization problem, i.e., maximizing the edge hit ratio while minimizing content access latency and traffic cost. To solve this complex problem, we focus on a distributed deep reinforcement learning (DRL)-based method for proactive caching, comprising content prediction and content decision-making. Specifically, since prior information about user requests is seldom available in practice during the current time period, a novel method named the temporal convolution sequence network (TCSN), built on the temporal convolution network (TCN) and an attention model, is used to improve the accuracy of content prediction. Furthermore, based on the content prediction results, the distributional deep Q network (DDQN) builds a distribution model over returns to optimize the content decision-making policy. The generative adversarial network (GAN) is adapted in a distributed fashion, emphasizing learning the data distribution and generating compelling data across multiple nodes. In addition, prioritized experience replay (PER) helps the agent learn from the most effective samples. We therefore propose a multivariate fusion algorithm called PG-DDQN. Finally, faced with such a complex scenario, a distributed learning architecture, i.e., a multi-agent learning architecture, is efficiently used to learn the DRL-based methods in a manner of centralized training and distributed inference. The experiments show that our proposal achieves satisfactory performance in terms of edge hit ratio, traffic cost, and content access latency.
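The abstract's PG-DDQN fuses PER, a GAN, and a distributional DQN; the paper's implementation is not reproduced here, but the PER component it builds on is a standard technique. As an illustration only, a minimal proportional prioritized replay buffer might look like the following sketch (the class name, `alpha` default, and priority epsilon are hypothetical choices, not taken from the paper):

```python
import random


class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (PER) sketch.

    Transitions are sampled with probability proportional to p_i^alpha,
    where p_i is a TD-error-based priority, so the learner draws the
    most informative samples more often.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha        # how strongly priorities skew sampling
        self.buffer = []          # stored transitions
        self.priorities = []      # one priority per transition
        self.pos = 0              # ring-buffer write index

    def add(self, transition, td_error=1.0):
        # Small epsilon keeps every transition sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            # Overwrite the oldest entry once the buffer is full.
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Draw indices with probability proportional to priority.
        idx = random.choices(range(len(self.buffer)),
                             weights=self.priorities, k=batch_size)
        return idx, [self.buffer[i] for i in idx]

    def update_priorities(self, indices, td_errors):
        # Refresh priorities after computing new TD errors.
        for i, err in zip(indices, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha
```

In a full PG-DDQN-style agent, the sampled batch would feed the distributional Q-network update and the resulting TD errors would be written back via `update_priorities`; importance-sampling weights, which correct the bias that prioritized sampling introduces, are omitted here for brevity.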
Pages: 1387 - 1399
Page count: 13
Related Papers
50 records in total
  • [41] HOODIE: Hybrid Computation Offloading via Distributed Deep Reinforcement Learning in Delay-Aware Cloud-Edge Continuum
    Giannopoulos, Anastasios E.
    Paralikas, Ilias
    Spantideas, Sotirios T.
    Trakadas, Panagiotis
    IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2024, 5 : 7818 - 7841
  • [42] Task Offloading in Cloud-Edge Environments: A Deep-Reinforcement-Learning-Based Solution
    Wang, Suzhen
    Deng, Yongchen
    Hu, Zhongbo
    INTERNATIONAL JOURNAL OF DIGITAL CRIME AND FORENSICS, 2023, 15 (01)
  • [43] Cloud-Edge Collaborative SFC Mapping for Industrial IoT Using Deep Reinforcement Learning
    Xu, Siya
    Li, Yimin
    Guo, Shaoyong
    Lei, Chenghao
    Liu, Di
    Qiu, Xuesong
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (06) : 4158 - 4168
  • [44] A collaborative cloud-edge computing framework in distributed neural network
    Xu, Shihao
    Zhang, Zhenjiang
    Kadoch, Michel
    Cheriet, Mohamed
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2020, 2020 (01)
  • [45] AI-Driven Proactive Content Caching for 6G
    Cheng, Guangquan
    Jiang, Chi
    Yue, Binglei
    Wang, Ranran
    Alzahrani, Bander
    Zhang, Yin
    IEEE WIRELESS COMMUNICATIONS, 2023, 30 (03) : 180 - 188
  • [46] Deep Reinforcement Learning Based Two-phase Proactive Caching for Collaborative Edge Networks
    Zhao, Ming
    Nakhai, Mohammad Reza
    2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024,
  • [47] Deep Reinforcement Learning Based Multi-Level Dynamic Reconfiguration for Urban Distribution Network: A Cloud-Edge Collaboration Architecture
    Jiang, Siyuan
    Gao, Hongjun
    Wang, Xiaohui
    Liu, Junyong
    Zuo, Kunyu
    GLOBAL ENERGY INTERCONNECTION, 2023, 6 (01) : 1 - 14
  • [48] Federated Learning Based Proactive Content Caching in Edge Computing
    Yu, Zhengxin
    Hu, Jia
    Min, Geyong
    Lu, Haochuan
    Zhao, Zhiwei
    Wang, Haozhe
    Georgalas, Nektarios
    2018 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2018,
  • [50] Proactive Caching in the Edge-Cloud Continuum with Federated Learning
    Zyrianoff, Ivan
    Montecchiari, Leonardo
    Trotta, Angelo
    Gigli, Lorenzo
    Kamienski, Carlos
    Di Felice, Marco
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024, : 234 - 240