Differentially Private and Fair Machine Learning: A Benchmark Study

Cited by: 0
Authors
Eponeshnikov, Alexander [1 ]
Bakhtadze, Natalia [2 ]
Smirnova, Gulnara [3 ]
Sabitov, Rustem [3 ]
Sabitov, Shamil [1 ]
Affiliations
[1] Kazan Fed Univ, Kazan, Russia
[2] RAS, Inst Control Sci, Moscow 117997, Russia
[3] Kazan Natl Res Tech Univ, Kazan, Russia
Source
IFAC PAPERSONLINE | 2024 / Vol. 58 / Issue 19
Keywords
differential privacy; machine learning; adversarial learning; fairness; accuracy; privacy-preserving models;
DOI
10.1016/j.ifacol.2024.09.192
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
With the increasing adoption of machine learning systems, concerns around bias and privacy have gained significant research interest. This work investigates the intersection of algorithmic fairness and differential privacy by evaluating differentially private fair representations. The LAFTR framework aims to learn fair data representations while maintaining utility. Differential privacy is injected into model training using DP-SGD to provide formal privacy guarantees. Experiments are conducted on the Adult, German Credit, and CelebA datasets, with gender and age as sensitive attributes. The models are evaluated across various configurations, including the privacy budget epsilon, adversary strength, and dataset characteristics. Results demonstrate that with proper tuning, differentially private models can achieve fair representations comparable to or better than those of non-private models. However, introducing privacy reduces stability during training. Overall, the analysis provides insights into the tradeoffs between accuracy, fairness, and privacy for different model configurations across datasets. The results establish a benchmark for further research into differentially private and fair machine learning models, advancing the understanding of training under an adversary. Copyright (C) 2024 The Authors. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
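The abstract mentions injecting differential privacy into training via DP-SGD, whose core mechanism is per-sample gradient clipping followed by calibrated Gaussian noise. The sketch below illustrates that single update step in plain NumPy; it is an illustrative simplification (the function name `dp_sgd_step` and its parameters are hypothetical, not from the paper, and privacy accounting for the budget epsilon is omitted entirely), not the authors' implementation.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD update.

    Each per-sample gradient is clipped to L2 norm `clip_norm` so no single
    example can dominate the update; Gaussian noise scaled by
    `noise_multiplier * clip_norm` is then added to the averaged gradient.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose norm exceeds the clipping threshold.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    grad = np.mean(clipped, axis=0)
    # Noise standard deviation follows the usual sigma * C / batch_size scaling.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_sample_grads),
                       size=grad.shape)
    return params - lr * (grad + noise)
```

In practice this step would be applied to both the encoder and the adversary of a LAFTR-style model via a library such as Opacus; the tradeoff the paper benchmarks comes from how `clip_norm` and `noise_multiplier` (which together determine epsilon) interact with adversary strength.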
Pages: 277-282
Page count: 6