HCFL: A High Compression Approach for Communication-Efficient Federated Learning in Very Large Scale IoT Networks

Cited by: 13
Authors
Nguyen, Minh-Duong [1 ]
Lee, Sang-Min [1 ]
Pham, Quoc-Viet [2 ]
Hoang, Dinh Thai [3 ]
Nguyen, Diep N. [3 ]
Hwang, Won-Joo [4 ]
Affiliations
[1] Pusan Natl Univ, Dept Informat Convergence Engn, Pusan 46241, South Korea
[2] Pusan Natl Univ, Korean Southeast Ctr Ind Revolut Leader Educ 4, Pusan 46241, South Korea
[3] Univ Technol Sydney, Sch Elect & Data Engn, Sydney, NSW 2007, Australia
[4] Pusan Natl Univ, Dept Biomed Convergence Engn, Yangsan 50612, South Korea
Funding
National Research Foundation of Singapore; Australian Research Council
Keywords
Autoencoder; communication efficiency; data compression; deep learning; distributed learning; federated learning; internet-of-things; machine type communication
DOI
10.1109/TMC.2022.3190510
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) is a new artificial intelligence concept that enables Internet-of-Things (IoT) devices to learn a collaborative model without sending raw data to centralized nodes for processing. Despite its numerous advantages, the low computing resources of IoT devices and the high communication costs of exchanging model parameters severely limit FL applications in massive IoT networks. In this work, we develop a novel compression scheme for FL, called high-compression federated learning (HCFL), for very large scale IoT networks. HCFL reduces the data load of FL processes without changing their structure or hyperparameters. In this way, we not only significantly reduce communication costs but also make intensive learning processes more adaptable to low-computing-resource IoT devices. Furthermore, we investigate the relationship between the number of IoT devices and the convergence level of the FL model, and thereby better assess the quality of the FL process. We demonstrate our HCFL scheme through both simulations and mathematical analyses. Our theoretical results can serve as a minimum level of satisfaction, proving that the FL process can achieve good performance whenever a determined configuration is met. Therefore, we show that HCFL is applicable to any FL-integrated network with numerous IoT devices.
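The abstract gives no implementation details, so the following is only a minimal, hypothetical PyTorch sketch of the general idea it describes: an autoencoder that compresses a flattened model update so each IoT device transmits a small code instead of the full parameter vector. It is not the authors' HCFL implementation; all names and dimensions (UpdateAutoencoder, PARAM_DIM, CODE_DIM) are illustrative assumptions.

    # Hypothetical sketch, NOT the authors' HCFL code: an autoencoder that
    # compresses a flattened model-update vector so a client can send a
    # low-dimensional code uplink instead of the raw parameters.
    import torch
    import torch.nn as nn

    PARAM_DIM = 10_000  # length of the flattened model update (assumed)
    CODE_DIM = 500      # compressed code, i.e., a ~20x reduction (assumed)

    class UpdateAutoencoder(nn.Module):
        def __init__(self, param_dim: int, code_dim: int):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(param_dim, 2048), nn.ReLU(),
                nn.Linear(2048, code_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(code_dim, 2048), nn.ReLU(),
                nn.Linear(2048, param_dim),
            )

        def forward(self, update: torch.Tensor) -> torch.Tensor:
            return self.decoder(self.encoder(update))

    # The client encodes its local update; the server decodes it before
    # aggregating, so the FL structure and hyperparameters stay untouched.
    ae = UpdateAutoencoder(PARAM_DIM, CODE_DIM)
    client_update = torch.randn(1, PARAM_DIM)  # stand-in for a real update
    code = ae.encoder(client_update)           # uplink payload: 500 floats
    reconstructed = ae.decoder(code)           # server-side reconstruction
    print(code.shape, reconstructed.shape)     # (1, 500), (1, 10000)

The compression ratio here follows directly from PARAM_DIM / CODE_DIM; how the autoencoder itself would be trained on update vectors is omitted for brevity.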
Pages: 6495-6507 (13 pages)
Related Papers (50 total)
  • [31] Ren, Yaoyao; Cao, Yu; Ye, Chengyin; Cheng, Xu. Two-layer accumulated quantized compression for communication-efficient federated learning: TLAQC. SCIENTIFIC REPORTS, 2023, 13 (01).
  • [32] Gad, Gad; Farrag, Aya; Fadlullah, Zubair Md; Fouda, Mostafa M. Communication-Efficient Federated Learning in Drone-Assisted IoT Networks: Path Planning and Enhanced Knowledge Distillation Techniques. 2023 IEEE 34TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS, PIMRC, 2023.
  • [33] Lu, Yunlong; Huang, Xiaohong; Zhang, Ke; Maharjan, Sabita; Zhang, Yan. Communication-Efficient Federated Learning and Permissioned Blockchain for Digital Twin Edge Networks. IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (04): 2276-2288.
  • [34] Bonawitz, Keith; Salehi, Fariborz; Konecny, Jakub; McMahan, Brendan; Gruteser, Marco. Federated Learning with Autotuned Communication-Efficient Secure Aggregation. CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2019: 1222-1226.
  • [35] Wu, Chuhan; Wu, Fangzhao; Lyu, Lingjuan; Huang, Yongfeng; Xie, Xing. Communication-efficient federated learning via knowledge distillation. NATURE COMMUNICATIONS, 2022, 13 (01).
  • [36] Chu, Dong; Jaafar, Wael; Yanikomeroglu, Halim. On the Design of Communication-Efficient Federated Learning for Health Monitoring. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022: 1128-1133.
  • [37] Hurley, Neil; Duriakova, Erika; Geraci, James; O'Reilly-Morgan, Diarmuid; Tragos, Elias; Smyth, Barry; Lawlor, Aonghus. ALS Algorithm for Robust and Communication-Efficient Federated Learning. PROCEEDINGS OF THE 2024 4TH WORKSHOP ON MACHINE LEARNING AND SYSTEMS, EUROMLSYS 2024, 2024: 56-64.
  • [38] Mu, Yuchen; Garg, Navneet; Ratnarajah, Tharmalingam. Communication-Efficient Federated Learning For Massive MIMO Systems. 2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022: 578-583.
  • [39] Zhao, Chen; Gao, Zhipeng; Wang, Qian; Xiao, Kaile; Mo, Zijia; Deen, M. Jamal. FedSup: A communication-efficient federated learning fatigue driving behaviors supervision approach. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 138: 52-60.
  • [40] Chen, Li; Liu, Wei; Chen, Yunfei; Wang, Weidong. Communication-Efficient Design for Quantized Decentralized Federated Learning. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2024, 72: 1175-1188.