Communication-efficient and Scalable Decentralized Federated Edge Learning

Cited by: 0
Authors
Yapp, Austine Zong Han [1 ]
Koh, Hong Soo Nicholas [1 ]
Lai, Yan Ting [1 ]
Kang, Jiawen [1 ]
Li, Xuandi [1 ]
Ng, Jer Shyuan [2 ]
Jiang, Hongchao [2 ]
Lim, Wei Yang Bryan [2 ]
Xiong, Zehui [3 ]
Niyato, Dusit [1 ]
Affiliations
[1] Nanyang Technol Univ NTU, Sch Comp Sci & Engn, Singapore, Singapore
[2] Nanyang Technol Univ, Alibaba NTU Singapore Joint Res Inst JRI, Singapore, Singapore
[3] Singapore Univ Technol & Design SUTD, Singapore, Singapore
Funding
National Research Foundation, Singapore;
Keywords
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Federated Edge Learning (FEL) is a distributed Machine Learning (ML) framework for collaborative training on edge devices. FEL improves data privacy over traditional centralized ML model training by keeping data on the devices and sending only local model updates to a central coordinator for aggregation. However, existing FEL architectures still suffer from high communication overhead between edge devices and the coordinator. In this paper, we present a working prototype of a blockchain-empowered and communication-efficient FEL framework, which enhances security and scalability towards large-scale implementation of FEL.
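The abstract describes the basic FEL loop: each edge device trains on its private data and sends only a model update to a coordinator, which aggregates the updates into a global model. The sketch below illustrates that generic loop with a FedAvg-style weighted average; it is a minimal illustration under assumed names (local_sgd_step, aggregate, synthetic data), not the paper's blockchain-empowered protocol.

```python
# Minimal FEL illustration: devices keep raw data locally and share only
# model updates; a coordinator aggregates them (FedAvg-style average).
# All function names and the linear-regression task are assumptions for
# illustration, not the authors' implementation.
import numpy as np

def local_sgd_step(weights, features, labels, lr=0.1):
    """One gradient step of linear regression on a device's private data."""
    preds = features @ weights
    grad = features.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def aggregate(local_models, sizes):
    """Coordinator: average local models weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_models, sizes))

rng = np.random.default_rng(0)
dim, n_devices = 5, 3
global_w = np.zeros(dim)

# Synthetic private datasets, one per edge device (never leave the device).
devices = [(rng.normal(size=(20, dim)), rng.normal(size=20))
           for _ in range(n_devices)]

for rnd in range(10):                       # communication rounds
    local_models, sizes = [], []
    for X, y in devices:                    # local training on each device
        w = local_sgd_step(global_w.copy(), X, y)
        local_models.append(w)              # only the update is transmitted
        sizes.append(len(y))
    global_w = aggregate(local_models, sizes)   # coordinator aggregation

print("Global model after 10 rounds:", global_w)
```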
Pages: 5032-5035
Number of pages: 4