Efficient Asynchronous Multi-Participant Vertical Federated Learning

Cited by: 3
Authors
Shi, Haoran [1 ]
Xu, Yonghui [2 ,3 ]
Jiang, Yali [1 ]
Yu, Han [4 ]
Cui, Lizhen [1 ,2 ]
Affiliations
[1] Shandong Univ, Sch Software, Jinan 250100, Peoples R China
[2] Shandong Univ, Joint SDU NTU Ctr Artificial Intelligence Res C FA, Jinan 250100, Peoples R China
[3] China Singapore Int Joint Res Inst, Guangzhou 510000, Peoples R China
[4] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
Funding
National Research Foundation, Singapore;
Keywords
Computational modeling; Stochastic processes; Training; Data models; Collaborative work; Privacy; Servers; Federated learning; privacy-preserving; asynchronous distributed computation;
DOI
10.1109/TBDATA.2022.3201729
Chinese Library Classification
TP [Automation technology; computer technology];
Subject classification code
0812 ;
Abstract
Vertical Federated Learning (VFL) is a privacy-preserving distributed machine learning paradigm that collaboratively trains machine learning models with participants whose local data largely overlap in the sample space but differ in the feature space. Existing VFL methods are mainly based on synchronous computation and homomorphic encryption (HE). Due to the differences in the communication and computation resources of the participants, straggling participants can cause delays during synchronous VFL model training, resulting in low computational efficiency. In addition, HE incurs high computation and communication costs. Moreover, it is difficult to establish a VFL coordinator (a.k.a. server) that all participants can trust. To address these problems, we propose an efficient Asynchronous Multi-participant Vertical Federated Learning method (AMVFL). AMVFL leverages asynchronous training, which reduces waiting time. At the same time, secret sharing is used instead of HE for privacy protection, which further reduces the computational cost. In addition, AMVFL does not require a trusted entity to serve as the VFL coordinator. Experimental results based on real-world and synthetic datasets demonstrate that AMVFL can significantly reduce computational cost and improve the accuracy of the model compared to five state-of-the-art VFL methods.
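The abstract's core efficiency claim rests on replacing homomorphic encryption with additive secret sharing. A minimal sketch of that primitive is shown below — this is an illustration of additive sharing in general, not the paper's actual protocol; the modulus `P` and the share/reconstruct helpers are hypothetical choices, not taken from AMVFL.

```python
# Illustrative additive secret sharing: a secret is split into n shares
# that individually reveal nothing, while sums of secrets can be computed
# by summing shares locally (the property that lets parties aggregate
# intermediate results without an HE scheme).
import secrets

P = 2**61 - 1  # large prime modulus (hypothetical parameter choice)

def share(secret, n_parties):
    """Split an integer secret into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod P."""
    return sum(shares) % P

# Each of three parties holds one share of each secret; adding shares
# pointwise yields shares of the sum, with no party seeing 7 or 5.
a_shares = share(7, 3)
b_shares = share(5, 3)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 12
```

Because sharing and reconstruction use only modular addition, the per-operation cost is far below HE's, which is consistent with the computational savings the abstract reports.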
Pages: 940 - 952
Number of pages: 13
Related papers
50 records in total
  • [21] Communication-Efficient Vertical Federated Learning
    Khan, Afsana
    ten Thij, Marijn
    Wilbik, Anna
    ALGORITHMS, 2022, 15 (08)
  • [22] Memorandum: A Mobile App for Efficient Note Keeping in Concurrent Multi-Participant Human Subject Studies
    Stefanopoulos, L.
    Maramis, C.
    Moulos, I.
    Maglaveras, N.
    Ioakimidis, I.
    2017 IEEE 30TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS), 2017, : 498 - 499
  • [23] Participant Selection for Efficient and Trusted Federated Learning in Blockchain-Assisted Hierarchical Federated Learning Architectures
    Liu, Peng
    Jia, Lili
    Xiao, Yang
    FUTURE INTERNET, 2025, 17 (02)
  • [24] FedDGIC: Reliable and Efficient Asynchronous Federated Learning with Gradient Compensation
    Xie, Zaipeng
    Jiang, Junchen
    Chen, Ruifeng
    Qu, Zhihao
    Liu, Hanxiang
    2022 IEEE 28TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS, ICPADS, 2022, : 98 - 105
  • [25] Time Efficient Federated Learning with Semi-asynchronous Communication
    Hao, Jiangshan
    Zhao, Yanchao
    Zhang, Jiale
    2020 IEEE 26TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2020, : 156 - 163
  • [26] Efficient asynchronous federated neuromorphic learning of spiking neural networks
    Wang, Yuan
    Duan, Shukai
    Chen, Feng
    NEUROCOMPUTING, 2023, 557
  • [27] Towards Efficient Asynchronous Federated Learning in Heterogeneous Edge Environments
    Zhou, Yajie
    Pang, Xiaoyi
    Wang, Zhibo
    Hu, Jiahui
    Sun, Peng
    Ren, Kui
    IEEE INFOCOM 2024-IEEE CONFERENCE ON COMPUTER COMMUNICATIONS, 2024, : 2448 - 2457
  • [28] AEDFL: Efficient Asynchronous Decentralized Federated Learning with Heterogeneous Devices
    Liu, Ji
    Che, Tianshi
    Zhou, Yang
    Jin, Ruoming
    Dai, Huaiyu
    Dou, Dejing
    Valduriez, Patrick
    PROCEEDINGS OF THE 2024 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2024, : 833 - 841
  • [29] Pisces: Efficient Federated Learning via Guided Asynchronous Training
    Jiang, Zhifeng
    Wang, Wei
    Li, Baochun
    Li, Bo
    PROCEEDINGS OF THE 13TH SYMPOSIUM ON CLOUD COMPUTING, SOCC 2022, 2022, : 370 - 385
  • [30] Multi-scale Conformer Fusion Network for Multi-participant Behavior Analysis
    Song, Qiya
    Dian, Renwei
    Sun, Bin
    Xie, Jie
    Li, Shutao
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 9472 - 9476