Ensemble Distillation Based Adaptive Quantization for Supporting Federated Learning in Wireless Networks

Cited by: 5
Authors
Liu, Yi-Jing [1 ,2 ]
Feng, Gang [1 ,2 ]
Niyato, Dusit [3 ]
Qin, Shuang [1 ,2 ]
Zhou, Jianhong [4 ]
Li, Xiaoqian [1 ,2 ]
Xu, Xinyi [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Natl Key Lab Commun, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Huzhou, Huzhou 313001, Peoples R China
[3] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[4] Xihua Univ, Sch Comp & Software Engn, Chengdu 610039, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Federated learning; wireless network; adaptive quantization; ensemble distillation; heterogeneous models; AGGREGATION;
DOI
10.1109/TWC.2022.3222717
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Federated learning (FL) has become a promising technique for developing intelligent wireless networks. In traditional FL paradigms, local models are usually required to be homogeneous for aggregation. However, because wireless system heterogeneity gives rise to heterogeneous models, it is preferable for user equipments (UEs) to undertake an appropriate amount of computing and/or data transmission work based on system constraints. Meanwhile, model training incurs considerable communication costs when a large number of UEs participate in FL and/or the transmitted models are large. Therefore, resource-efficient training schemes for heterogeneous models are essential for enabling FL-based intelligent wireless networks. In this paper, we propose an adaptive quantization scheme based on ensemble distillation (AQeD) to facilitate heterogeneous model training. We first partition the participating UEs into clusters, where the local models within each cluster are homogeneous with different quantization levels. Then we propose an augmented loss function that jointly considers the ensemble distillation loss, quantization levels, and wireless resource constraints. In AQeD, model aggregation is performed at two levels: model aggregation within individual clusters and distillation loss aggregation across the cluster ensemble. Numerical results show that the AQeD scheme can significantly reduce communication costs and training time compared with state-of-the-art solutions.
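To make the abstract's two main ingredients concrete, the following NumPy sketch illustrates (a) uniform quantization of a local model at a per-cluster bit-width and FedAvg-style aggregation within a cluster, and (b) an ensemble-distillation loss that combines cross-entropy on hard labels with a KL term toward the averaged soft predictions of the cluster ensemble. This is an illustrative reading of the scheme, not the authors' implementation; the function names (`quantize`, `aggregate_cluster`, `distillation_loss`), the uniform quantizer, and the loss weight `alpha` are assumptions made for this sketch.

```python
# Minimal sketch (assumed, not the paper's code): per-cluster quantized
# aggregation plus an ensemble-distillation loss, using NumPy only.
import numpy as np

def quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Uniform quantization of a weight vector to the given bit-width."""
    levels = 2 ** bits - 1
    w_min, w_max = weights.min(), weights.max()
    if w_max == w_min:                      # constant vector: nothing to quantize
        return weights.copy()
    step = (w_max - w_min) / levels
    return np.round((weights - w_min) / step) * step + w_min

def aggregate_cluster(local_weights: list[np.ndarray],
                      sample_counts: list[int],
                      bits: int) -> np.ndarray:
    """FedAvg-style aggregation of homogeneous, quantized models in one cluster."""
    total = float(sum(sample_counts))
    quantized = [quantize(w, bits) for w in local_weights]
    return sum(n / total * w for n, w in zip(sample_counts, quantized))

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      cluster_logits: list[np.ndarray],
                      labels: np.ndarray,
                      alpha: float = 0.5) -> float:
    """Cross-entropy on hard labels plus KL divergence toward the averaged
    soft predictions of the cluster ensemble (the 'teacher')."""
    p_student = softmax(student_logits)
    p_teacher = softmax(np.mean(cluster_logits, axis=0))
    ce = -np.mean(np.log(p_student[np.arange(len(labels)), labels] + 1e-12))
    kl = np.mean(np.sum(p_teacher * np.log((p_teacher + 1e-12) /
                                           (p_student + 1e-12)), axis=-1))
    return (1 - alpha) * ce + alpha * kl

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three UEs in one cluster, each holding a 10-dimensional "model".
    local_models = [rng.normal(size=10) for _ in range(3)]
    counts = [100, 200, 50]
    print("4-bit cluster model:", aggregate_cluster(local_models, counts, bits=4))
```

In the paper, the per-cluster quantization level and the weighting of the augmented loss are chosen adaptively under wireless resource constraints; the fixed `bits` and `alpha` above are placeholders for that optimization.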
Pages: 4013-4027
Number of pages: 15
Related Papers
50 records in total
  • [31] FedED: Federated Learning via Ensemble Distillation for Medical Relation Extraction
    Sui, Dianbo
    Chen, Yubo
    Zhao, Jun
    Jia, Yantao
    Xie, Yuantao
    Sun, Weijian
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 2118 - 2128
  • [32] Rate distortion optimization for adaptive gradient quantization in federated learning
    Chen, Guojun
    Xie, Kaixuan
    Luo, Wenqiang
    Xu, Yinfei
    Xin, Lun
    Song, Tiecheng
    Hu, Jing
    Digital Communications and Networks, 2024, 10 (06) : 1813 - 1825
  • [34] Adaptive Sparsification and Quantization for Enhanced Energy Efficiency in Federated Learning
    Marnissi, Ouiame
    El Hammouti, Hajar
    Bergou, El Houcine
    IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2024, 5 : 4307 - 4321
  • [35] FedACQ: adaptive clustering quantization of model parameters in federated learning
    Tian, Tingting
    Shi, Hongjian
    Ma, Ruhui
    Liu, Yuan
    INTERNATIONAL JOURNAL OF WEB INFORMATION SYSTEMS, 2024, 20 (01) : 88 - 110
  • [36] FedRAD: Heterogeneous Federated Learning via Relational Adaptive Distillation
    Tang, Jianwu
    Ding, Xuefeng
    Hu, Dasha
    Guo, Bing
    Shen, Yuncheng
    Ma, Pan
    Jiang, Yuming
    SENSORS, 2023, 23 (14)
  • [37] Fedadkd: Heterogeneous Federated Learning via Adaptive Knowledge Distillation
    Song, Yalin
    Liu, Hang
    Zhao, Shuai
    Jin, Haozhe
    Yu, Junyang
    Liu, Yanhong
    Zhai, Rui
    Wang, Longge
    PATTERN ANALYSIS AND APPLICATIONS, 2024, 27 (04)
  • [38] Adaptive Backdoor Attacks Against Dataset Distillation for Federated Learning
    Chai, Ze
    Gao, Zhipeng
    Lin, Yijing
    Zhao, Chen
    Yu, Xinlei
    Xie, Zhiqiang
    ICC 2024 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2024, : 4614 - 4619
  • [39] Adaptive User Selection and Bandwidth Allocation for Fast Convergence of Federated Learning in Wireless Networks
    Pan, Jiaqi
    Chen, Zhikun
    Zhao, Ming
    Zhang, Sihai
    Zhu, Jinkang
    2023 INTERNATIONAL CONFERENCE ON FUTURE COMMUNICATIONS AND NETWORKS, FCN, 2023,
  • [40] Latency Minimization for TDMA-Based Wireless Federated Learning Networks
    Xu, Ding
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (09) : 13974 - 13979