Communication-Efficient Federated Double Distillation in IoV

Cited: 3
Authors
Yang, Peng [1 ,2 ]
Yan, Mengjiao [1 ,2 ]
Cui, Yaping [1 ,2 ]
He, Peng [1 ,2 ]
Wu, Dapeng [1 ,2 ]
Wang, Ruyan [1 ,2 ]
Chen, Luo [1 ,2 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Adv Network & Intelligent Connect Technol, Key Lab Chongqing Educ Commiss China, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Ubiquitous Sensing & Networking, Chongqing 400065, Peoples R China
Keywords
Training; Servers; Federated learning; Data models; Computational modeling; Vehicle dynamics; Technological innovation; Vehicular edge computing; federated learning; communication efficiency; knowledge distillation; SELECTION; INTERNET;
DOI
10.1109/TCCN.2023.3286665
Chinese Library Classification (CLC)
TN [Electronic technology, communication technology];
Discipline Code
0809;
Abstract
6G will mark the next horizon, moving from connected people and things to intelligence-of-everything. Machine learning (ML) brings artificial intelligence to vehicular edge computing (VEC), further unlocking the potential of data. Federated learning (FL) coordinates collaborative training between vehicular clients and the road side unit (RSU), enabling efficient distributed data processing. However, vehicular clients upload large numbers of model parameters, occupying scarce communication resources, and link stability cannot be guaranteed because of vehicle mobility, which may lead to expensive communication overhead in VEC. This paper therefore proposes a communication-efficient federated double distillation (FedDD) framework, which comprehensively considers three-dimensional attributes of the vehicles and dynamically selects cluster heads (CHs) to improve transmission efficiency. Knowledge distillation is then integrated into federated learning to perform multiple rounds of parameter distillation, significantly reducing the communication overhead. Experimental results show that, compared with traditional FedAvg, FedDD reduces communication overhead by three orders of magnitude; compared with FTTQ, it reduces communication overhead by 82%, improving the communication efficiency of FL while sacrificing only a small amount of accuracy.
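
A minimal, hypothetical sketch of the workflow the abstract describes: score vehicles on three attributes to pick cluster heads, then exchange distilled soft labels instead of full model parameters. This is not the authors' implementation; the attribute names (link quality, sojourn time, compute capability), the scoring weights, the linear client model, and the shared reference set used for distillation are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

def select_cluster_heads(vehicles, weights=(0.4, 0.4, 0.2), n_heads=2):
    """Rank vehicles by a weighted score over three (assumed) attributes."""
    attrs = np.array([[v["link_quality"], v["sojourn_time"], v["compute"]]
                      for v in vehicles], dtype=float)
    # Normalise each attribute to [0, 1] so the weights are comparable.
    attrs = (attrs - attrs.min(axis=0)) / (np.ptp(attrs, axis=0) + 1e-9)
    scores = attrs @ np.array(weights)
    return list(np.argsort(scores)[::-1][:n_heads])

def soft_labels(model_w, x, temperature=2.0):
    """Temperature-softened class probabilities of a linear model on x."""
    z = x @ model_w / temperature
    z -= z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy setup: 5 vehicles, each with a 256-feature / 10-class linear model, and
# a small shared reference set used only to exchange distilled knowledge.
vehicles = [{"link_quality": rng.random(),
             "sojourn_time": rng.random(),
             "compute": rng.random(),
             "model": rng.normal(size=(256, 10))} for _ in range(5)]
x_ref = rng.normal(size=(32, 256))

heads = select_cluster_heads(vehicles)
print("cluster heads:", heads)

# One distillation round: every vehicle sends its soft labels on the shared
# reference set to a cluster head, which averages them and forwards only the
# aggregate to the RSU, instead of each vehicle uploading all its parameters.
member_payload = np.stack([soft_labels(v["model"], x_ref) for v in vehicles])
ch_aggregate = member_payload.mean(axis=0)

floats_per_model = vehicles[0]["model"].size   # 2560 floats per vehicle
floats_per_distill = ch_aggregate.size         # 320 floats for the cluster
print("uplink floats, parameter upload:", floats_per_model * len(vehicles))
print("uplink floats, distilled upload:", floats_per_distill)

In this toy the saving is only the ratio of model size to reference-set logits; the multi-order-of-magnitude reduction reported in the paper comes from distilling much larger models over multiple rounds.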
Pages: 1340-1352
Number of pages: 13
Related Papers
50 records in total
  • [1] Communication-efficient federated learning via knowledge distillation
    Wu, Chuhan
    Wu, Fangzhao
    Lyu, Lingjuan
    Huang, Yongfeng
    Xie, Xing
    NATURE COMMUNICATIONS, 2022, 13 (01)
  • [2] Communication-Efficient Federated Distillation with Active Data Sampling
    Liu, Lumin
    Zhang, Jun
    Song, S. H.
    Letaief, Khaled B.
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 201 - 206
  • [3] FedDD: Federated Double Distillation in IoV
    Yang, Peng
    Yan, Mengjiao
    Cui, Yaping
    He, Peng
    Wu, Dapeng
    Wang, Ruyan
    Chen, Luo
    2022 IEEE 96TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-FALL), 2022,
  • [4] Communication-Efficient Federated Distillation: Theoretical Analysis and Performance Enhancement
    Liu, Lumin
    Zhang, Jun
    Song, Shenghui
    Letaief, Khaled B.
IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (23): 37959 - 37973
  • [5] Prototype Similarity Distillation for Communication-Efficient Federated Unsupervised Representation Learning
    Zhang, Chen
    Xie, Yu
    Chen, Tingbin
    Mao, Wenjie
    Yu, Bin
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11) : 6865 - 6876
  • [6] Communication-Efficient Federated Skin Lesion Classification with Generalizable Dataset Distillation
    Tian, Yuchen
    Wang, Jiacheng
    Jin, Yueming
    Wang, Liansheng
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023 WORKSHOPS, 2023, 14393 : 14 - 24
  • [7] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [8] Communication-Efficient and Attack-Resistant Federated Edge Learning With Dataset Distillation
    Zhou, Yanlin
    Ma, Xiyao
    Wu, Dapeng
    Li, Xiaolin
    IEEE TRANSACTIONS ON CLOUD COMPUTING, 2023, 11 (03) : 2517 - 2528
  • [9] To Distill or Not to Distill: Toward Fast, Accurate, and Communication-Efficient Federated Distillation Learning
    Zhang, Yuan
    Zhang, Wenlong
    Pu, Lingjun
    Lin, Tao
    Yan, Jinyao
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (06) : 10040 - 10053
  • [10] Communication-efficient Federated Learning for UAV Networks with Knowledge Distillation and Transfer Learning
    Li, Yalong
    Wu, Celimuge
    Du, Zhaoyang
    Zhong, Lei
    Yoshinaga, Tsutomu
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 5739 - 5744