Energy-Efficient Intelligence Sharing in Intelligence Networking-Empowered Edge Computing: A Deep Reinforcement Learning Approach

Cited: 0
Authors
Xie, Junfeng [1 ]
Jia, Qingmin [2 ]
Chen, Youxing [1 ]
Affiliations
[1] North Univ China, Sch Informat & Commun Engn, Taiyuan 030051, Peoples R China
[2] Purple Mt Labs, Nanjing 211111, Peoples R China
Source
IEEE ACCESS, 2024, Vol. 12
Funding
National Natural Science Foundation of China
Keywords
Intelligence sharing; intelligence networking; edge computing; TD3; CLIENT SELECTION; RESOURCE-ALLOCATION; WIRELESS NETWORKS; OPTIMIZATION; MANAGEMENT; BLOCKCHAIN; INTERNET; AI;
DOI
10.1109/ACCESS.2024.3469956
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Advanced artificial intelligence (AI) and multi-access edge computing (MEC) technologies facilitate the development of edge intelligence, bringing the intelligence learned in the remote cloud down to the network edge. To achieve automatic decision-making, the training efficiency and accuracy of AI models are crucial for edge intelligence. However, the volume of data collected at each network edge node is limited, which may cause AI models to overfit. To improve the training efficiency and accuracy of AI models for edge intelligence, intelligence networking-empowered edge computing (INEEC) is a promising solution: it enables each network edge node to improve its AI models quickly and economically with the help of the learned intelligence shared by other network edge nodes. Efficient intelligence sharing among network edge nodes is therefore essential for INEEC. Thus, in this paper, we study an intelligence sharing scheme that aims to maximize the system energy efficiency while ensuring latency tolerance by jointly optimizing the intelligence requesting strategy, transmission power control, and computation resource allocation. The system energy efficiency is defined as the ratio of model performance to energy consumption. Taking into account the dynamic characteristics of edge network conditions, the intelligence sharing problem is modeled as a Markov decision process (MDP). Subsequently, a twin delayed deep deterministic policy gradient (TD3)-based algorithm is designed to make the optimal decisions automatically. Finally, extensive simulation experiments show that: 1) compared with DDPG and DQN, the proposed algorithm has better convergence performance; 2) jointly optimizing the intelligence requesting strategy, transmission power control, and computation resource allocation helps to improve intelligence sharing efficiency; and 3) under different parameter settings, the proposed algorithm achieves better results than the benchmark algorithms.
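As a rough illustration of the objective described in the abstract (a sketch only; the symbols below are assumptions, not notation taken from the paper), let \Phi_i denote the model performance gain that edge node i obtains from the intelligence it requests, E^{tx}_i(p_i) and E^{cmp}_i(f_i) its transmission and computation energy under transmission power p_i and allocated computation resource f_i, a_i its intelligence requesting decision, and T^{max}_i its latency tolerance. The joint optimization could then be written as

\max_{\{a_i,\, p_i,\, f_i\}} \; \frac{\sum_i \Phi_i(a_i)}{\sum_i \left( E^{tx}_i(p_i) + E^{cmp}_i(f_i) \right)} \quad \text{s.t.} \quad T_i(a_i, p_i, f_i) \le T^{max}_i \;\; \forall i,

i.e., the ratio of aggregate model performance to total energy consumption is maximized subject to per-node latency constraints; under this reading, this ratio is the quantity the TD3 agent's reward would track.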
Pages: 141639-141652
Page count: 14
Related Papers
(50 in total)
  • [31] Energy Efficient Joint Computation Offloading and Service Caching for Mobile Edge Computing: A Deep Reinforcement Learning Approach
    Zhou, Huan
    Zhang, Zhenyu
    Wu, Yuan
    Dong, Mianxiong
    Leung, Victor C. M.
    IEEE TRANSACTIONS ON GREEN COMMUNICATIONS AND NETWORKING, 2023, 7 (02): 950 - 961
  • [32] CamThings: IoT Camera with Energy-Efficient Communication by Edge Computing based on Deep Learning
    Lim, Jaebong
    Seo, Juhee
    Back, Yunju
    2018 28TH INTERNATIONAL TELECOMMUNICATION NETWORKS AND APPLICATIONS CONFERENCE (ITNAC), 2018, : 181 - 186
  • [33] Collaborative Multi-Agent Deep Reinforcement Learning for Energy-Efficient Resource Allocation in Heterogeneous Mobile Edge Computing Networks
    Xiao, Yang
    Song, Yuqian
    Liu, Jun
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (06) : 6653 - 6668
  • [34] Energy-Efficient Computation Offloading With DVFS Using Deep Reinforcement Learning for Time-Critical IoT Applications in Edge Computing
    Panda, Saroj Kumar
    Lin, Man
    Zhou, Ti
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (08) : 6611 - 6621
  • [35] EPtask: Deep Reinforcement Learning Based Energy-Efficient and Priority-Aware Task Scheduling for Dynamic Vehicular Edge Computing
    Li, Peisong
    Xiao, Ziren
    Wang, Xinheng
    Huang, Kaizhu
    Huang, Yi
    Gao, Honghao
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (01): 1830 - 1846
  • [36] Energy-efficient Resource Allocation for UAV-empowered Mobile Edge Computing System
    Cheng, Yu
    Liao, Yangzhe
    Zhai, Xiaojun
    2020 IEEE/ACM 13TH INTERNATIONAL CONFERENCE ON UTILITY AND CLOUD COMPUTING (UCC 2020), 2020, : 408 - 413
  • [37] Joint Edge Association and Aggregation Frequency for Energy-Efficient Hierarchical Federated Learning by Deep Reinforcement Learning
    Ren, Yijing
    Wu, Changxiang
    So, Daniel K. C.
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3639 - 3645
  • [38] Energy-efficient deep learning inference on edge devices
    Daghero, Francesco
    Pagliari, Daniele Jahier
    Poncino, Massimo
    HARDWARE ACCELERATOR SYSTEMS FOR ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING, 2021, 122 : 247 - 301
  • [39] Deep Reinforcement Learning based Energy Scheduling for Edge Computing
    Yang, Qinglin
    Li, Peng
    2020 IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2020), 2020, : 175 - 180
  • [40] Robotic Edge Intelligence for Energy-Efficient Human-Robot Collaboration
    Cai, Zhengying
    Du, Xiangyu
    Huang, Tianhao
    Lv, Tianrui
    Cai, Zhiheng
    Gong, Guoqiang
    SUSTAINABILITY, 2024, 16 (22)