Advancing Personalized Federated Learning: Group Privacy, Fairness, and Beyond

Cited: 0
Authors
Galli F. [1,3]
Jung K. [2]
Biswas S. [2,4]
Palamidessi C. [2,4]
Cucinotta T. [3]
Affiliations
[1] Scuola Normale Superiore, Pisa
[2] INRIA, Palaiseau
[3] Scuola Superiore Sant’Anna, Pisa
[4] École Polytechnique, Palaiseau
Keywords
Fairness; Federated learning; Metric privacy; Personalized models
DOI
10.1007/s42979-023-02292-0
Abstract
Federated learning (FL) is a framework for training machine learning models in a distributed and collaborative manner. During training, a set of participating clients process their locally stored data, sharing only the updates to the statistical model’s parameters obtained by minimizing a cost function over their local inputs. FL was proposed as a stepping stone towards privacy-preserving machine learning, but it has been shown to expose clients to issues such as leakage of private information, lack of personalization of the model, and the possibility that the trained model is fairer to some groups of clients than to others. This paper focuses on the triadic interaction among personalization, privacy guarantees, and the fairness attained by models trained within the FL framework. Differential privacy and its variants have been studied and applied as the cutting-edge standard for providing formal privacy guarantees. However, clients in FL often hold very diverse datasets representing heterogeneous communities, so it is important to protect their sensitive and personal information while still ensuring that the trained model is fair to its users. To attain this objective, a method is put forth that introduces group privacy guarantees through d-privacy (aka metric privacy), a localized form of differential privacy that relies on a metric-oriented obfuscation approach to preserve the topological distribution of the original data. Besides enabling personalized model training in a federated setting with formal privacy guarantees, the method achieves significantly better group fairness, measured under a variety of standard metrics, than a global model trained within a classical FL template. Theoretical justifications for the method’s applicability are provided, together with experimental validation on real-world datasets illustrating how the proposed method works. © 2023, The Author(s).
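The metric-oriented obfuscation described in the abstract can be illustrated with a minimal Python sketch, assuming the Euclidean metric on the space of model updates; the function name and the Gamma-based sampler below are illustrative choices, not the paper's actual implementation. A mechanism whose output density at z is proportional to exp(-epsilon * d(x, z)) satisfies epsilon*d-privacy, and for d(x, z) = ||x - z||_2 it can be sampled by drawing a uniform direction on the unit sphere and a Gamma-distributed radius.

    import numpy as np

    def d_private_update(update, epsilon, rng=None):
        # Multivariate Laplace mechanism for epsilon*d-privacy under the
        # Euclidean metric: the output density at z is proportional to
        # exp(-epsilon * ||z - update||_2), so updates computed on similar
        # datasets stay indistinguishable in proportion to their distance.
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(update, dtype=float)
        n = x.size
        # Uniform direction on the unit (n-1)-sphere.
        direction = rng.normal(size=n)
        direction /= np.linalg.norm(direction)
        # Radius from Gamma(shape=n, scale=1/epsilon): integrating the
        # density above over spherical shells gives f(r) ~ r^(n-1) e^(-epsilon*r).
        radius = rng.gamma(shape=n, scale=1.0 / epsilon)
        return x + radius * direction

    # Example: a client obfuscates its local update before sending it to the server.
    local_update = np.array([0.12, -0.05, 0.33])
    noisy_update = d_private_update(local_update, epsilon=5.0)

In a federated round, each client would apply such a perturbation to its parameter update before sharing it, preserving the relative geometry of the updates (and hence supporting personalization) while bounding what the server can infer about any individual client's data.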
Related Papers
50 records in total
  • [1] Ensuring Fairness and Gradient Privacy in Personalized Heterogeneous Federated Learning
    Lewis, Cody
    Varadharajan, Vijay
    Noman, Nasimul
    Tupakula, Uday
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2024, 15 (03)
  • [2] Enforcing group fairness in privacy-preserving Federated Learning
    Chen, Chaomeng
    Zhou, Zhenhong
    Tang, Peng
    He, Longzhu
    Su, Sen
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 160 : 890 - 900
  • [3] Personalized Federated Learning With Differential Privacy
    Hu, Rui
    Guo, Yuanxiong
    Li, Hongning
    Pei, Qingqi
    Gong, Yanmin
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (10) : 9530 - 9539
  • [4] Fairness and privacy preserving in federated learning: A survey
    Rafi, Taki Hasan
    Noor, Faiza Anan
    Hussain, Tahmid
    Chae, Dong-Kyu
    INFORMATION FUSION, 2024, 105
  • [5] Privacy and Fairness in Federated Learning: On the Perspective of Tradeoff
    Chen, Huiqiang
    Zhu, Tianqing
    Zhang, Tao
    Zhou, Wanlei
    Yu, Philip S.
    ACM COMPUTING SURVEYS, 2024, 56 (02)
  • [6] Differential Privacy in HyperNetworks for Personalized Federated Learning
    Nemala, Vaisnavi
Lai, Phung
Phan, NhatHai
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 4224 - 4228
  • [7] Privacy-Preserving Personalized Federated Learning
    Hu, Rui
    Guo, Yuanxiong
    Li, Hongning
    Pei, Qingqi
    Gong, Yanmin
    ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [8] Personalized Graph Federated Learning With Differential Privacy
Gauthier, F.
Gogineni, V. C.
Werner, S.
Huang, Y.-F.
Kuh, A.
IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, 2023, 9 : 736 - 749
  • [9] The Impact of Differential Privacy on Model Fairness in Federated Learning
    Gu, Xiuting
    Zhu, Tianqing
    Li, Jie
    Zhang, Tao
    Ren, Wei
    NETWORK AND SYSTEM SECURITY, NSS 2020, 2020, 12570 : 419 - 430
  • [10] FairFed: Enabling Group Fairness in Federated Learning
    Ezzeldin, Yahya H.
    Yan, Shen
    He, Chaoyang
    Ferrara, Emilio
    Avestimehr, Salman
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023, : 7494 - 7502