Topology Design and Graph Embedding for Decentralized Federated Learning

Cited by: 0
Authors
Duan Y. [1]
Li X. [1]
Wu J. [1]
Affiliations
[1] Temple University, Department of Computer and Information Sciences, Philadelphia, PA 19122
Source
Intelligent and Converged Networks, 2024, Vol. 5, No. 2
Keywords
data heterogeneity; decentralized federated learning; graph embedding; network topology
DOI
10.23919/ICN.2024.0008
Abstract
Federated learning has been widely employed in many applications to protect the data privacy of participating clients. Although the training data are decentralized among devices in federated learning, the model parameters are usually aggregated in a centralized manner. Centralized federated learning is easy to implement; however, the centralized scheme creates a communication bottleneck at the central server, which may significantly slow down training. To improve training efficiency, we investigate the decentralized federated learning scheme, which has become feasible with the rapid development of device-to-device communication techniques under 5G. Nevertheless, the convergence rate of models trained under the decentralized scheme depends on the network topology. We propose optimizing the topology design to improve training efficiency for decentralized federated learning, a non-trivial problem, especially when data heterogeneity is considered. In this paper, we first demonstrate the advantage of the hypercube topology and present a hypercube graph construction method that reduces data heterogeneity by carefully selecting the neighbors of each training device, a process that resembles classic graph embedding. In addition, we propose a heuristic method for generating torus graphs. Moreover, we explore the communication patterns of the hypercube topology and propose a sequential synchronization scheme that reduces the communication cost during training; a batch synchronization scheme is also presented to fine-tune the communication pattern. Experiments on real-world datasets show that the proposed graph construction methods accelerate training and that the sequential synchronization scheme significantly reduces the overall communication traffic. © 2020 Tsinghua University Press.
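The hypercube construction and synchronization ideas summarized above can be made concrete with a small sketch. The following is an illustrative reading, not the paper's exact algorithms: in a d-dimensional hypercube, each of the 2^d devices links to the d devices whose vertex IDs differ in a single bit, and a greedy embedding can place devices with dissimilar label histograms on adjacent vertices so that each link spans heterogeneous data shards. The names (`hypercube_neighbors`, `greedy_embedding`, `sequential_sync`), the L1 dissimilarity criterion, and the dimension-by-dimension averaging are assumptions made for illustration.

```python
# Illustrative sketch only; names and criteria are assumed, not the paper's exact method.
import numpy as np

def hypercube_neighbors(v: int, dim: int) -> list[int]:
    """Neighbors of vertex v in a dim-dimensional hypercube: flip each bit once."""
    return [v ^ (1 << k) for k in range(dim)]

def greedy_embedding(label_hists: np.ndarray, dim: int) -> list[int]:
    """Greedily place 2**dim devices on hypercube vertices so that adjacent
    vertices hold dissimilar label distributions (one heuristic reading of
    heterogeneity-aware neighbor selection).

    label_hists: shape (2**dim, num_classes), one normalized label histogram
    per device. Returns placement[v] = index of the device at vertex v.
    """
    n = 1 << dim
    placement = [0] * n            # seed: device 0 occupies vertex 0
    unassigned = set(range(1, n))
    for v in range(1, n):
        # Vertices are filled in increasing order, so every neighbor u < v
        # (obtained by clearing a set bit of v) is already occupied.
        filled = [placement[u] for u in hypercube_neighbors(v, dim) if u < v]
        ctx = np.mean(label_hists[filled], axis=0)
        # Pick the device whose histogram differs most (L1) from its future
        # neighbors, so each link connects heterogeneous shards.
        best = max(unassigned, key=lambda d: np.abs(label_hists[d] - ctx).sum())
        placement[v] = best
        unassigned.remove(best)
    return placement

def sequential_sync(models: np.ndarray, dim: int) -> np.ndarray:
    """Assumed reading of sequential synchronization: average across a single
    hypercube dimension per step instead of over all links at once. After dim
    steps every row equals the global average (the hypercube allreduce pattern).
    """
    models = np.asarray(models, dtype=float)       # shape (2**dim, num_params)
    for k in range(dim):
        partner = np.arange(len(models)) ^ (1 << k)
        models = 0.5 * (models + models[partner])  # pairwise average along dim k
    return models

# Toy usage: 8 devices on a 3-D cube with skewed 4-class label histograms.
rng = np.random.default_rng(0)
hists = rng.dirichlet(np.full(4, 0.3), size=8)
print(greedy_embedding(hists, dim=3))
print(sequential_sync(rng.normal(size=(8, 2)), dim=3))
```

Under this reading, each device exchanges parameters with only one neighbor per step rather than all d at once, which is one way a sequential pattern can reduce per-round traffic; the paper itself should be consulted for the exact mechanics of its sequential and batch schemes.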
Pages: 100-115 (15 pages)
Related papers (50 in total)
  • [21] Communication-Efficient Design for Quantized Decentralized Federated Learning
    Chen, Li
    Liu, Wei
    Chen, Yunfei
    Wang, Weidong
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2024, 72 : 1175 - 1188
  • [22] A Blockchain Based Decentralized Gradient Aggregation Design for Federated Learning
    Zhao, Jian
    Wu, Xin
    Zhang, Yan
    Wu, Yu
    Wang, Zhi
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT II, 2021, 12892 : 359 - 371
  • [23] Design of Anti-Plagiarism Mechanisms in Decentralized Federated Learning
    Shao, Yumeng
    Li, Jun
    Ding, Ming
    Wei, Kang
    Ma, Chuan
    Shi, Long
    Chen, Wen
    Jin, Shi
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (04) : 1465 - 1479
  • [24] RIS-Empowered Topology Control for Decentralized Federated Learning in Urban Air Mobility
    Xiong, Kai
    Wang, Rui
    Leng, Supeng
    Huang, Chongwen
    Yuen, Chau
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (24) : 40757 - 40770
  • [25] Topology Learning for Heterogeneous Decentralized Federated Learning Over Unreliable D2D Networks
    Wu, Zheshun
    Xu, Zenglin
    Zeng, Dun
    Li, Junfan
    Liu, Jie
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (08) : 12201 - 12206
  • [26] Gossip Learning as a Decentralized Alternative to Federated Learning
    Hegedus, Istvan
    Danner, Gabor
    Jelasity, Mark
    DISTRIBUTED APPLICATIONS AND INTEROPERABLE SYSTEMS, DAIS 2019, 2019, 11534 : 74 - 90
  • [27] Decentralized Federated Learning with Prototype Exchange
    Qi, Lu
    Chen, Haoze
    Zou, Hongliang
    Chen, Shaohua
    Zhang, Xiaoying
    Chen, Hongyan
    MATHEMATICS, 2025, 13 (02)
  • [28] Fedstellar: A Platform for Decentralized Federated Learning
    Beltran, Enrique Tomas Martinez
    Gomez, Angel Luis Perales
    Feng, Chao
    Sanchez, Pedro Miguel
    Bernal, Sergio Lopez
    Bovet, Gerome
    Perez, Manuel Gil
    Perez, Gregorio Martinez
    Celdran, Alberto Huertas
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 242
  • [29] Towards Efficient Decentralized Federated Learning
    Pappas, Christodoulos
    Papadopoulos, Dimitrios
    Chatzopoulos, Dimitris
    Panagou, Eleni
    Lalis, Spyros
    Vavalis, Manolis
    2022 IEEE 42ND INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS WORKSHOPS (ICDCSW), 2022, : 79 - 85
  • [30] STATISTICAL INFERENCE FOR DECENTRALIZED FEDERATED LEARNING
    Gu, Jia
    Chen, Song Xi
    ANNALS OF STATISTICS, 2024, 52 (06) : 2931 - 2955