Proactive Caching With Distributed Deep Reinforcement Learning in 6G Cloud-Edge Collaboration Computing

Cited by: 2
Authors
Wu, Changmao [1 ]
Xu, Zhengwei [2 ]
He, Xiaoming [3 ]
Lou, Qi [1 ]
Xia, Yuanyuan [1 ]
Huang, Shuman [2 ]
Affiliations
[1] Chinese Acad Sci, Inst Software, Beijing 100190, Peoples R China
[2] Henan Normal Univ, Coll Comp & Informat Engn, Xinxiang 453007, Peoples R China
[3] Nanjing Univ Posts & Telecommun, Coll Internet Things, Nanjing 210049, Peoples R China
Keywords
Costs; Training; 6G mobile communication; Servers; Predictive models; Generative adversarial networks; Optimization; 6G; distributed edge computing; proactive caching; deep reinforcement learning; multi-agent learning architecture; BLOCKCHAIN;
DOI
10.1109/TPDS.2024.3406027
CLC Classification Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Proactive caching in 6G cloud-edge collaboration scenarios, which intelligently and periodically updates the cached contents, can both alleviate traffic congestion on the backhaul and edge cooperative links and bring multimedia services closer to mobile users. To further improve the network performance of the 6G cloud-edge, we consider a multi-objective joint optimization problem: maximizing the edge hit ratio while minimizing content access latency and traffic cost. To solve this complex problem, we focus on a distributed deep reinforcement learning (DRL)-based method for proactive caching, comprising content prediction and content decision-making. Specifically, since prior information about user requests is seldom available in practice for the current time period, a novel method named the temporal convolution sequence network (TCSN), built on the temporal convolution network (TCN) and an attention model, is used to improve the accuracy of content prediction. Furthermore, given the content predictions, the distributional deep Q network (DDQN) builds a distribution model over returns to optimize the content decision-making policy. The generative adversarial network (GAN) is adapted in a distributed fashion, emphasizing learning the data distribution and generating compelling data across multiple nodes. In addition, prioritized experience replay (PER) helps the agent learn from the most informative samples. We therefore propose a multivariate fusion algorithm called PG-DDQN. Finally, faced with such a complex scenario, a distributed learning architecture, i.e., a multi-agent learning architecture, is used to efficiently train the DRL-based methods via centralized training and distributed inference. Experiments show that our proposal achieves satisfactory performance in terms of edge hit ratio, traffic cost, and content access latency.
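The prioritized experience replay (PER) component mentioned in the abstract can be illustrated with a minimal proportional-PER buffer in Python. This is a generic sketch of the PER technique, not the paper's PG-DDQN implementation; the class and method names (`PrioritizedReplayBuffer`, `add`, `sample`) and the hyperparameter values are assumptions for illustration only.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (PER) sketch.

    A transition i is sampled with probability p_i**alpha / sum_j p_j**alpha,
    where p_i is derived from its TD error, so high-error (more informative)
    experiences are replayed more often than uniform sampling would allow.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha        # 0 = uniform sampling, 1 = fully greedy
        self.buffer = []          # stored transitions
        self.priorities = []      # one priority per transition

    def add(self, transition, td_error):
        # Small epsilon keeps every transition sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) >= self.capacity:   # evict the oldest entry
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        total = sum(self.priorities)
        weights = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=weights, k=batch_size)
        return [self.buffer[i] for i in idx]
```

In a full implementation (as in the original PER work), importance-sampling weights would also be returned to correct the bias introduced by non-uniform sampling, and a sum-tree would replace the linear scan for O(log n) sampling.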
Pages: 1387-1399
Page count: 13
Related Papers
50 records in total
  • [21] Collaborative Edge Computing and Caching With Deep Reinforcement Learning Decision Agents
    Ren, Jianji
    Wang, Haichao
    Hou, Tingting
    Zheng, Shuai
    Tang, Chaosheng
    IEEE ACCESS, 2020, 8 : 120604 - 120612
  • [22] Federated Deep Reinforcement Learning for Recommendation-Enabled Edge Caching in Mobile Edge-Cloud Computing Networks
    Sun, Chuan
    Li, Xiuhua
    Wen, Junhao
    Wang, Xiaofei
    Han, Zhu
    Leung, Victor C. M.
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2023, 41 (03) : 690 - 705
  • [23] A Cloud-Edge Collaboration Solution for Distribution Network Reconfiguration Using Multi-Agent Deep Reinforcement Learning
    Gao, Hongjun
    Wang, Renjun
    He, Shuaijia
    Wang, Lingfeng
    Liu, Junyong
    Chen, Zhe
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2024, 39 (02) : 3867 - 3879
  • [24] A Deep Learning Based Efficient Data Transmission for Industrial Cloud-Edge Collaboration
    Wu, Yu
    Yang, Bo
    Li, Cheng
    Liu, Qi
    Liu, Yuxiang
    Zhu, Dafeng
    2022 IEEE 31ST INTERNATIONAL SYMPOSIUM ON INDUSTRIAL ELECTRONICS (ISIE), 2022, : 1202 - 1207
  • [25] Cloud-Edge Collaboration with Green Scheduling and Deep Learning for Industrial Internet of Things
    Cui, Yunfei
    Zhang, Heli
    Ji, Hong
    Li, Xi
    Shao, Xun
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021
  • [26] Cloud-Edge Training Architecture for Sim-to-Real Deep Reinforcement Learning
    Cao, Hongpeng
    Theile, Mirco
    Wyrwal, Federico G.
    Caccamo, Marco
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 9363 - 9370
  • [27] Efficient End-Edge-Cloud Task Offloading in 6G Networks Based on Multiagent Deep Reinforcement Learning
    She, Hao
    Yan, Lixing
    Guo, Yongan
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (11) : 20260 - 20270
  • [28] Distributed Deep Learning at the Edge: A Novel Proactive and Cooperative Caching Framework for Mobile Edge Networks
    Saputra, Yuris Mulya
    Dinh Thai Hoang
    Nguyen, Diep N.
    Dutkiewicz, Eryk
    Niyato, Dusit
    Kim, Dong In
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2019, 8 (04) : 1220 - 1223
  • [29] A deep reinforcement learning approach towards distributed Function as a Service (FaaS) based edge application orchestration in cloud-edge continuum
    Khansari, Mina Emami
    Sharifian, Saeed
    JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2025, 233
  • [30] Deep Reinforcement Learning Based Cloud-Edge Collaborative Computation Offloading Mechanism
    Chen S.-G.
    Chen J.-M.
    Zhao C.-X.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2021, 49 (01): : 157 - 166