Energy-Efficient Intelligence Sharing in Intelligence Networking-Empowered Edge Computing: A Deep Reinforcement Learning Approach

Cited by: 0
Authors
Xie, Junfeng [1 ]
Jia, Qingmin [2 ]
Chen, Youxing [1 ]
Affiliations
[1] North Univ China, Sch Informat & Commun Engn, Taiyuan 030051, Peoples R China
[2] Purple Mt Labs, Nanjing 211111, Peoples R China
Source
IEEE ACCESS | 2024 / Vol. 12
Funding
National Natural Science Foundation of China;
Keywords
Intelligence sharing; intelligence networking; edge computing; TD3; CLIENT SELECTION; RESOURCE-ALLOCATION; WIRELESS NETWORKS; OPTIMIZATION; MANAGEMENT; BLOCKCHAIN; INTERNET; AI;
DOI
10.1109/ACCESS.2024.3469956
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Advanced artificial intelligence (AI) and multi-access edge computing (MEC) technologies facilitate the development of edge intelligence, pushing intelligence learned at the remote cloud to the network edge. For automatic decision-making, the training efficiency and accuracy of AI models are crucial to edge intelligence. However, the volume of data collected by each network edge node is limited, which may cause AI models to overfit. To improve the training efficiency and accuracy of AI models for edge intelligence, intelligence networking-empowered edge computing (INEEC) is a promising solution: it enables each network edge node to improve its AI models quickly and economically with the help of the intelligence shared by other network edge nodes. Sharing intelligence among network edge nodes efficiently is therefore essential for INEEC. In this paper, we study an intelligence sharing scheme that aims to maximize the system energy efficiency, while respecting the latency tolerance, by jointly optimizing the intelligence requesting strategy, transmission power control and computation resource allocation. The system energy efficiency is defined as the ratio of model performance to energy consumption. Taking into account the dynamic characteristics of edge network conditions, the intelligence sharing problem is modeled as a Markov decision process (MDP). A twin delayed deep deterministic policy gradient (TD3)-based algorithm is then designed to make the optimal decisions automatically. Finally, extensive simulation experiments show that: 1) the proposed algorithm converges better than DDPG and DQN; 2) jointly optimizing the intelligence requesting strategy, transmission power control and computation resource allocation improves intelligence sharing efficiency; and 3) under different parameter settings, the proposed algorithm outperforms the benchmark algorithms.
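Two quantities named in the abstract can be illustrated with a minimal sketch: the system energy efficiency (ratio of model performance to energy consumption) and the clipped double-Q target that distinguishes TD3 from DDPG. All function names and numbers below are hypothetical placeholders, not the authors' implementation:

```python
def energy_efficiency(performance_gain, tx_energy, comp_energy):
    """System energy efficiency as defined in the abstract:
    model-performance gain divided by the total energy spent
    (transmission + computation) to obtain it."""
    return performance_gain / (tx_energy + comp_energy)


def td3_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """TD3's clipped double-Q bootstrapped target: the smaller of the
    two critic estimates is used to curb Q-value overestimation,
    which is one reason TD3 tends to converge more stably than DDPG."""
    q_min = min(q1_next, q2_next)
    return reward + (0.0 if done else gamma) * q_min


# Hypothetical numbers: a 0.12 accuracy gain costing 0.8 J of
# transmission energy and 1.2 J of computation energy.
eff = energy_efficiency(performance_gain=0.12, tx_energy=0.8, comp_energy=1.2)

# Target uses min(5.0, 4.0) = 4.0, so y = 1.0 + 0.99 * 4.0.
y = td3_target(reward=1.0, q1_next=5.0, q2_next=4.0)
```

In the paper's full algorithm the actor and twin critics are neural networks trained over the MDP state (edge network conditions) and action (requesting strategy, power, computation resources); the sketch only isolates the scalar arithmetic of the two definitions.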
Pages: 141639-141652
Page count: 14
Related Papers (50 total)
  • [1] An Energy-Efficient Intelligence Sharing Scheme in Intelligence Networking-Empowered Edge Computing
    Xie, Junfeng
    Jia, Qingmin
    Lu, Fengliang
    IEEE ACCESS, 2024, 12 : 90940 - 90951
  • [2] Collective Deep Reinforcement Learning for Intelligence Sharing in the Internet of Intelligence-Empowered Edge Computing
    Tang, Qinqin
    Xie, Renchao
    Yu, Fei Richard
    Chen, Tianjiao
    Zhang, Ran
    Huang, Tao
    Liu, Yunjie
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (11) : 6327 - 6342
  • [3] Neuromorphic Computing for Energy-Efficient Edge Intelligence
    Panda, Priya
    2024 INTERNATIONAL VLSI SYMPOSIUM ON TECHNOLOGY, SYSTEMS AND APPLICATIONS, VLSI TSA, 2024
  • [4] Edge intelligence computing for mobile augmented reality with deep reinforcement learning approach
    Chen, Miaojiang
    Liu, Wei
    Wang, Tian
    Liu, Anfeng
    Zeng, Zhiwen
    COMPUTER NETWORKS, 2021, 195
  • [5] Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge Intelligence
    Wei, Peng
    Guo, Kun
    Li, Ye
    Wang, Jue
    Feng, Wei
    Jin, Shi
    Ge, Ning
    Liang, Ying-Chang
    IEEE ACCESS, 2022, 10 : 65156 - 65192
  • [6] Deep Reinforcement Learning for Energy-Efficient Computation Offloading in Mobile-Edge Computing
    Zhou, Huan
    Jiang, Kai
    Liu, Xuxun
    Li, Xiuhua
    Leung, Victor C. M.
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (02) : 1517 - 1530
  • [7] Deep Reinforcement Learning-Based Energy-Efficient Edge Computing for Internet of Vehicles
    Kong, Xiangjie
    Duan, Gaohui
    Hou, Mingliang
    Shen, Guojiang
    Wang, Hui
    Yan, Xiaoran
    Collotta, Mario
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (09) : 6308 - 6316
  • [8] An Energy-Efficient Dynamic Offloading Algorithm for Edge Computing Based on Deep Reinforcement Learning
    Zhu, Keyu
    Li, Shaobo
    Zhang, Xingxing
    Wang, Jinming
    Xie, Cankun
    Wu, Fengbin
    Xie, Rongxiang
    IEEE ACCESS, 2024, 12 : 127489 - 127506
  • [9] Energy-efficient activity-driven computing architectures for edge intelligence
    Liu, Shih-Chii
    Gao, Chang
    Kim, Kwantae
    Delbruck, Tobi
    2022 INTERNATIONAL ELECTRON DEVICES MEETING, IEDM, 2022