LFDC: Low-Energy Federated Deep Reinforcement Learning for Caching Mechanism in Cloud-Edge Collaborative

Cited by: 2
Authors
Zhang, Xinyu [1 ]
Hu, Zhigang [1 ]
Zheng, Meiguang [1 ]
Liang, Yang [1 ,2 ]
Xiao, Hui [1 ]
Zheng, Hao [1 ]
Xu, Aikun [1 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci, Changsha 410083, Peoples R China
[2] Hunan Univ Chinese Med, Sch Informat, Changsha 410083, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, No. 10
Funding
National Natural Science Foundation of China;
Keywords
cloud-edge collaborative environments; caching strategies; deep reinforcement learning (DRL); low-energy federated deep reinforcement learning strategy for caching mechanisms (LFDC); energy efficiency; NETWORKS; POLICIES;
DOI
10.3390/app13106115
CLC Number
O6 [Chemistry];
Discipline Code
0703;
Abstract
The optimization of caching mechanisms has long been a crucial research focus in cloud-edge collaborative environments, as effective caching strategies can substantially enhance the quality of user experience in these settings. Deep reinforcement learning (DRL), with its ability to perceive the environment and develop intelligent policies online, has been widely employed for designing caching strategies. Recently, federated learning combined with DRL has been gaining popularity for optimizing caching strategies while protecting the privacy of training data from eavesdropping attacks. However, online federated deep reinforcement learning algorithms face highly dynamic environments, and real-time training increases training energy consumption even as it improves caching efficiency. To address this issue, we propose a low-energy federated deep reinforcement learning strategy for caching mechanisms (LFDC) that balances caching efficiency against training energy consumption. The LFDC strategy encompasses a novel energy efficiency model, a deep reinforcement learning mechanism, and a dynamic energy-saving federated policy. Our experimental results demonstrate that the proposed LFDC strategy significantly outperforms existing benchmarks in terms of energy efficiency.
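To make the abstract's idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of the general scheme it describes: each edge node trains a small caching policy locally, a coordinator periodically averages the local models, and local training is cut off once a per-round energy budget is exhausted. All names and numbers here (EdgeAgent, ENERGY_BUDGET, the reward shaping, the fixed per-update energy cost) are illustrative assumptions, not details taken from the paper.

    # Toy federated learning loop for edge cache admission with an energy-budget gate.
    # Illustrative assumptions only; not the LFDC algorithm itself.
    import random
    from collections import defaultdict

    N_CONTENTS = 20       # size of the content catalogue (assumed)
    CACHE_SIZE = 5        # per-edge cache capacity (assumed)
    ENERGY_BUDGET = 3.0   # per-edge, per-round training-energy budget, arbitrary units (assumed)

    class EdgeAgent:
        """One edge node: epsilon-greedy value table over (content, cache-full?) -> skip/admit."""
        def __init__(self, epsilon=0.1, alpha=0.2):
            self.q = defaultdict(float)   # learned value of taking `action` in `state`
            self.cache = set()
            self.epsilon, self.alpha = epsilon, alpha

        def act(self, state):
            if random.random() < self.epsilon:
                return random.choice((0, 1))              # explore: 0 = skip, 1 = admit
            return max((0, 1), key=lambda a: self.q[(state, a)])

        def step(self):
            """Serve one synthetic request, update cache and values, return (hit, energy_used)."""
            content = random.randint(0, N_CONTENTS - 1)   # stand-in for a real request trace
            hit = content in self.cache
            state = (content, len(self.cache) >= CACHE_SIZE)
            action = self.act(state)
            if action == 1 and not hit:
                if len(self.cache) >= CACHE_SIZE:
                    self.cache.remove(random.choice(tuple(self.cache)))  # random eviction
                self.cache.add(content)
            # Crude reward: hits are best; admitting on a miss gets partial credit as a
            # proxy for future hits. A bandit-style update (no bootstrapping) keeps the toy simple.
            reward = 1.0 if hit else (0.2 if action == 1 else -0.1)
            self.q[(state, action)] += self.alpha * (reward - self.q[(state, action)])
            return hit, 0.01                              # fixed per-update energy cost (assumed)

    def aggregate(agents):
        """Federated averaging of the per-edge value tables; only model values leave the edge."""
        merged = defaultdict(float)
        for agent in agents:
            for key, value in agent.q.items():
                merged[key] += value / len(agents)
        for agent in agents:
            agent.q = defaultdict(float, merged)

    agents = [EdgeAgent() for _ in range(4)]
    for rnd in range(10):
        hits = reqs = 0
        for agent in agents:
            spent = 0.0
            while spent < ENERGY_BUDGET:                  # energy-saving gate: stop local
                hit, cost = agent.step()                  # training once the budget is spent
                spent += cost
                hits += hit
                reqs += 1
        aggregate(agents)                                 # raw requests never leave the edge
        print(f"round {rnd:2d}: hit rate {hits / reqs:.2f}")

In this sketch the per-round budget check plays the role of a dynamic energy-saving policy, and averaging the value tables stands in for federated aggregation of DRL model weights; the actual paper's energy efficiency model and DRL design are more elaborate.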
Pages: 19
Related Papers
50 records in total
  • [41] Real-time Surveillance Video Salient Object Detection Using Collaborative Cloud-Edge Deep Reinforcement Learning
    Hou, Biao
    Zhang, Junxing
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [42] Federated Reinforcement Learning with Adaptive Training Times for Edge Caching
    Shaoshuai Fan
    Liyun Hu
    Hui Tian
China Communications, 2022, 19 (08) : 57 - 72
  • [43] Federated Reinforcement Learning with Adaptive Training Times for Edge Caching
    Fan, Shaoshuai
    Hu, Liyun
    Tian, Hui
    CHINA COMMUNICATIONS, 2022, 19 (08) : 57 - 72
  • [44] Deep Reinforcement Learning Based Resource Allocation Strategy in Cloud-Edge Computing System
    Xu, Zhuohan
    Zhong, Zeheng
    Shi, Bing
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [45] Task Offloading in Cloud-Edge Environments: A Deep-Reinforcement-Learning-Based Solution
    Wang, Suzhen
    Deng, Yongchen
    Hu, Zhongbo
    INTERNATIONAL JOURNAL OF DIGITAL CRIME AND FORENSICS, 2023, 15 (01)
  • [46] Multi-Agent Deep Reinforcement Learning for Cooperative Offloading in Cloud-Edge Computing
    Suzuki, Akito
    Kobayashi, Masahiro
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 3660 - 3666
  • [47] Deep Reinforcement Learning Based Resource Allocation Strategy in Cloud-Edge Computing System
    Xu, Jianqiao
    Xu, Zhuohan
    Shi, Bing
    FRONTIERS IN BIOENGINEERING AND BIOTECHNOLOGY, 2022, 10
  • [48] PPVerifier: A Privacy-Preserving and Verifiable Federated Learning Method in Cloud-Edge Collaborative Computing Environment
    Lin, Li
    Zhang, Xiaoying
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (10) : 8878 - 8892
  • [49] Deep Reinforcement Learning for Energy-Efficient Edge Caching in Mobile Edge Networks
    Deng, Meng
    Huan, Zhou
    Kai, Jiang
    Zheng, Hantong
    Yue, Cao
    Peng, Chen
    CHINA COMMUNICATIONS, 2024, : 1 - 14
  • [50] Deep Reinforcement Learning for Energy-Efficient Edge Caching in Mobile Edge Networks
    Meng Deng
    Zhou Huan
    Jiang Kai
    Zheng Hantong
    Cao Yue
    Chen Peng
    China Communications, 2024, 21 (11) : 243 - 256