Energy-Efficient Intelligence Sharing in Intelligence Networking-Empowered Edge Computing: A Deep Reinforcement Learning Approach

Cited: 0
Authors
Xie, Junfeng [1 ]
Jia, Qingmin [2 ]
Chen, Youxing [1 ]
Affiliations
[1] North Univ China, Sch Informat & Commun Engn, Taiyuan 030051, Peoples R China
[2] Purple Mt Labs, Nanjing 211111, Peoples R China
Source
IEEE ACCESS, 2024, Vol. 12
Funding
National Natural Science Foundation of China
Keywords
Intelligence sharing; intelligence networking; edge computing; TD3; CLIENT SELECTION; RESOURCE-ALLOCATION; WIRELESS NETWORKS; OPTIMIZATION; MANAGEMENT; BLOCKCHAIN; INTERNET; AI;
DOI
10.1109/ACCESS.2024.3469956
Chinese Library Classification
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
Advanced artificial intelligence (AI) and multi-access edge computing (MEC) technologies facilitate the development of edge intelligence, enabling intelligence learned in the remote cloud to be brought to the network edge. To achieve automatic decision-making, the training efficiency and accuracy of AI models are crucial for edge intelligence. However, the volume of data collected at each network edge node is limited, which may cause AI models to over-fit. To improve the training efficiency and accuracy of AI models for edge intelligence, intelligence networking-empowered edge computing (INEEC) is a promising solution: it enables each network edge node to improve its AI models quickly and economically with the help of the intelligence shared by other network edge nodes. Sharing intelligence among network edge nodes efficiently is therefore essential for INEEC. Thus, in this paper, we study an intelligence sharing scheme that aims to maximize the system energy efficiency while respecting the latency tolerance, by jointly optimizing the intelligence requesting strategy, transmission power control and computation resource allocation. The system energy efficiency is defined as the ratio of model performance to energy consumption. Taking into account the dynamic characteristics of edge network conditions, the intelligence sharing problem is modeled as a Markov decision process (MDP). Subsequently, a twin delayed deep deterministic policy gradient (TD3)-based algorithm is designed to make the optimal decisions automatically. Finally, extensive simulation experiments show that: 1) compared with DDPG and DQN, the proposed algorithm converges better; 2) jointly optimizing the intelligence requesting strategy, transmission power control and computation resource allocation helps to improve intelligence sharing efficiency; and 3) under different parameter settings, the proposed algorithm achieves better results than the benchmark algorithms.
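The objective described in the abstract, energy efficiency as the ratio of model performance to energy consumption subject to a latency tolerance, can be sketched as a per-step MDP reward. This is a minimal illustration only: the function name, arguments, and the fixed-penalty scheme for latency violations are assumptions for exposition, not details taken from the paper.

```python
def energy_efficiency_reward(model_performance, energy_consumption,
                             latency, latency_tolerance, penalty=-1.0):
    """Reward for one intelligence-sharing decision step.

    Per the abstract, system energy efficiency is model performance
    divided by energy consumed. Decisions that exceed the latency
    tolerance receive a fixed penalty (the penalty value is an
    assumed design choice, not specified in the paper).
    """
    if latency > latency_tolerance:
        return penalty
    return model_performance / energy_consumption


# Example: a decision that meets the deadline is rewarded by its
# performance-per-energy ratio; a late one is penalized.
r_ok = energy_efficiency_reward(1.0, 2.0, latency=0.1, latency_tolerance=0.2)
r_late = energy_efficiency_reward(1.0, 2.0, latency=0.3, latency_tolerance=0.2)
```

A TD3 agent would maximize the discounted sum of such rewards over its joint action (requesting strategy, transmit power, compute allocation).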
Pages: 141639-141652 (14 pages)
Related Papers (50 records)
  • [21] Nguyen, Dinh C.; Ding, Ming; Pathirana, Pubudu N.; Seneviratne, Aruna; Li, Jun; Poor, H. Vincent: "Utility Optimization for Blockchain Empowered Edge Computing with Deep Reinforcement Learning," IEEE International Conference on Communications (ICC 2021), 2021.
  • [22] Xiao, Yilin; Xiao, Liang; Wan, Kunpeng; Yang, Helin; Zhang, Yi; Wu, Yi; Zhang, Yanyong: "Reinforcement Learning Based Energy-Efficient Collaborative Inference for Mobile Edge Computing," IEEE Transactions on Communications, 2023, 71(2): 864-876.
  • [23] Zhu, Sha; Ota, Kaoru; Dong, Mianxiong: "Energy-Efficient Artificial Intelligence of Things With Intelligent Edge," IEEE Internet of Things Journal, 2022, 9(10): 7525-7532.
  • [24] Wang, Shudong; Zhao, Shengzhe; Gui, Haiyuan; He, Xiao; Lu, Zhi; Chen, Baoyun; Fan, Zixuan; Pang, Shanchen: "Energy-efficient collaborative task offloading in multi-access edge computing based on deep reinforcement learning," Ad Hoc Networks, 2025, 169.
  • [25] Le Gallo, Manuel; Sebastian, Abu; Eleftheriou, Evangelos: "In-Memory Computing: Towards Energy-Efficient Artificial Intelligence," ERCIM News, 2018, (115): 44-45.
  • [26] Jevremovic, Aleksandar; Kostic, Zona; Perakovic, Dragan: "Energy-Efficient Edge Intelligence: A Comparative Analysis of AIoT Technologies," Mobile Networks & Applications, 2024, 29(1): 147-155.
  • [27] Ale, Laha; Zhang, Ning; Fang, Xiaojie; Chen, Xianfu; Wu, Shaohua; Li, Longzhuang: "Delay-Aware and Energy-Efficient Computation Offloading in Mobile-Edge Computing Using Deep Reinforcement Learning," IEEE Transactions on Cognitive Communications and Networking, 2021, 7(3): 881-892.
  • [28] Wang, Qu; Xiao, Yong; Zhu, Huixiang; Sun, Zijian; Li, Yingyu; Ge, Xiaohu: "Towards Energy-efficient Federated Edge Intelligence for IoT Networks," 2021 IEEE 41st International Conference on Distributed Computing Systems Workshops (ICDCSW 2021), 2021: 55-62.
  • [29] Agbaje, Paul; Nwafor, Ebelechukwu; Olufowobi, Habeeb: "Deep Reinforcement Learning for Energy-Efficient Task Offloading in Cooperative Vehicular Edge Networks," 2023 IEEE 21st International Conference on Industrial Informatics (INDIN), 2023.
  • [30] Zheng, Hantong; Zhou, Huan; Wang, Ning; Chen, Peng; Xu, Shouzhi: "Reinforcement Learning for Energy-efficient Edge Caching in Mobile Edge Networks," IEEE Conference on Computer Communications Workshops (IEEE INFOCOM WKSHPS 2021), 2021.