Robust federated learning under statistical heterogeneity via Hessian-weighted aggregation

Cited by: 7
Authors
Ahmad, Adnan [1]
Luo, Wei [1]
Robles-Kelly, Antonio [1]
Affiliations
[1] Deakin Univ, Sch Informat Technol, Geelong, Australia
Keywords
Federated learning; Model aggregation; Gauss-Newton methods in federated learning; Algorithms
DOI
10.1007/s10994-022-06292-8
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In federated learning, client models are often trained on local training sets that vary in size and distribution. Such statistical heterogeneity in the training data leads to performance variations across local models, and even within a single model some parameter estimates can be more reliable than others. Most existing FL approaches (such as FedAvg), however, do not explicitly account for these variations in client parameter estimates and treat all local parameters with equal importance during model aggregation. This disregard of the varying evidential credence among client models often leads to slow convergence and a global model that is sensitive to unreliable local estimates. We address this gap by proposing an aggregation mechanism based on the Hessian matrix. Further, by making use of first-order information of the loss function, we can use the Hessian as a scaling matrix in a manner akin to that employed in Quasi-Newton methods. This treatment captures the impact of data-quality variations across local models. Experiments show that our method outperforms the baselines of Federated Averaging (FedAvg), FedProx, Federated Curvature (FedCurv), and Federated Newton Learn (FedNL) for image classification on the MNIST, Fashion-MNIST, and CIFAR-10 datasets when client models are trained on statistically heterogeneous data.
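To make the aggregation idea in the abstract concrete, below is a minimal sketch (not the authors' reference implementation) of per-parameter, curvature-weighted averaging. It assumes each client reports its parameter vector together with a diagonal Hessian approximation (for example, a Gauss-Newton-style squared-gradient estimate); the function name hessian_weighted_aggregate and its interface are hypothetical.

```python
# Minimal sketch of Hessian-weighted aggregation (assumed interface, not the
# paper's code). Each client k supplies a parameter vector theta_k and a
# diagonal curvature estimate h_k; the server weights each coordinate by
# curvature so better-estimated parameters contribute more.
import numpy as np

def hessian_weighted_aggregate(thetas, hessians, eps=1e-8):
    """Aggregate client parameters coordinate-wise, weighted by curvature.

    thetas   : list of 1-D arrays, one parameter vector per client
    hessians : list of 1-D arrays, matching diagonal Hessian estimates
    """
    thetas = np.stack(thetas)      # shape (num_clients, num_params)
    hessians = np.stack(hessians)  # shape (num_clients, num_params)
    # Normalize curvatures per coordinate so the weights sum to ~1 per parameter.
    weights = hessians / (hessians.sum(axis=0, keepdims=True) + eps)
    return (weights * thetas).sum(axis=0)

# Toy usage: client 0 has higher curvature (more evidence) on the first
# coordinate, so the global value leans toward its estimate there.
theta_global = hessian_weighted_aggregate(
    thetas=[np.array([1.0, 0.0]), np.array([3.0, 2.0])],
    hessians=[np.array([4.0, 1.0]), np.array([1.0, 1.0])],
)
print(theta_global)  # approximately [1.4, 1.0]
```

In this toy example the client with the larger curvature estimate on a coordinate dominates that coordinate of the global model, whereas plain FedAvg would weight both clients equally on every parameter.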
Pages: 633-654 (22 pages)
Related Papers (50 records in total)
  • [11] Robust Aggregation Function in Federated Learning
    Taheri, Rahim
    Arabikhan, Farzad
    Gegov, Alexander
    Akbari, Negar
    ADVANCES IN INFORMATION SYSTEMS, ARTIFICIAL INTELLIGENCE AND KNOWLEDGE MANAGEMENT, ICIKS 2023, 2024, 486 : 168 - 175
  • [12] Byzantine-robust Federated Learning via Cosine Similarity Aggregation
    Zhu, Tengteng
    Guo, Zehua
    Yao, Chao
    Tan, Jiaxin
    Dou, Songshi
    Wang, Wenrun
    Han, Zhenzhen
    COMPUTER NETWORKS, 2024, 254
  • [13] A Practical Recipe for Federated Learning under Statistical Heterogeneity Experimental Design
    Morafah M.
    Wang W.
    Lin B.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (04): : 1708 - 1717
  • [14] FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation
    Fan, Ziqing
    Wang, Yanfeng
    Yao, Jiangchao
    Lyu, Lingjuan
    Zhang, Ya
    Tian, Qi
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2022, : 131 - 140
  • [15] WVFL: Weighted Verifiable Secure Aggregation in Federated Learning
    Zhong, Yijian
    Tan, Wuzheng
    Xu, Zhifeng
    Chen, Shixin
    Weng, Jiasi
    Weng, Jian
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (11): : 19926 - 19936
  • [16] Byzantine-Robust Aggregation for Federated Learning with Reinforcement Learning
    Yan, Sizheng
    Du, Junping
    Xue, Zhe
    Li, Ang
    WEB AND BIG DATA, APWEB-WAIM 2024, PT IV, 2024, 14964 : 152 - 166
  • [17] Robust Aggregation for Federated Learning by Minimum γ-Divergence Estimation
    Li, Cen-Jhih
    Huang, Pin-Han
    Ma, Yi-Ting
    Hung, Hung
    Huang, Su-Yun
    ENTROPY, 2022, 24 (05)
  • [18] Federated Learning Aggregation: New Robust Algorithms with Guarantees
    Ben Mansour, Adnan
    Carenini, Gaia
    Duplessis, Alexandre
    Naccache, David
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 721 - 726
  • [19] Robust Secure Aggregation with Lightweight Verification for Federated Learning
    Huang, Chao
    Yao, Yanqing
    Zhang, Xiaojun
    Teng, Da
    Wang, Yingdong
    Zhou, Lei
    2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, 2022, : 582 - 589
  • [20] RTGA: Robust ternary gradients aggregation for federated learning
    Yang, Chengang
    Xiao, Danyang
    Cao, Bokai
    Wu, Weigang
    INFORMATION SCIENCES, 2022, 616 : 427 - 443