Differentially Private and Fair Machine Learning: A Benchmark Study

Cited by: 0
Authors
Eponeshnikov, Alexander [1 ]
Bakhtadze, Natalia [2 ]
Smirnova, Gulnara [3 ]
Sabitov, Rustem [3 ]
Sabitov, Shamil [1 ]
Affiliations
[1] Kazan Fed Univ, Kazan, Russia
[2] RAS, Inst Control Sci, Moscow 117997, Russia
[3] Kazan Natl Res Tech Univ, Kazan, Russia
Source
IFAC PAPERSONLINE | 2024, Vol. 58, Issue 19
Keywords
differential privacy; machine learning; adversarial learning; fairness; accuracy; privacy-preserving models;
D O I
10.1016/j.ifacol.2024.09.192
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
With the increasing adoption of machine learning systems, concerns around bias and privacy have gained significant research interest. This work investigates the intersection of algorithmic fairness and differential privacy by evaluating differentially private fair representations. The LAFTR framework is used to learn fair data representations while maintaining utility, and differential privacy is injected into model training via DP-SGD to provide formal privacy guarantees. Experiments are conducted on the Adult, German Credit, and CelebA datasets, with gender and age as sensitive attributes. The models are evaluated across various configurations, including the privacy budget epsilon, adversary strength, and dataset characteristics. Results demonstrate that, with proper tuning, differentially private models can achieve fair representations comparable to, or better than, those of non-private models. However, introducing privacy reduces training stability. Overall, the analysis provides insights into the trade-offs between accuracy, fairness, and privacy for different model configurations across datasets. The results establish a benchmark for further research into differentially private and fair machine learning models, advancing the understanding of training under an adversary. Copyright (C) 2024 The Authors. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
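The abstract's core mechanism, DP-SGD, replaces the ordinary gradient step with per-example gradient clipping followed by calibrated Gaussian noise; the noise scale (together with the sampling rate and number of steps) determines the privacy budget epsilon. The following is a minimal, self-contained sketch of one such step for logistic regression, not the paper's actual implementation; the function name `dp_sgd_step` and all hyperparameter defaults are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One DP-SGD step for logistic regression (illustrative sketch).

    Each example's gradient is clipped to L2 norm <= `clip`, the clipped
    gradients are summed, and Gaussian noise with std sigma * clip is
    added before averaging -- the mechanism that yields the formal
    (epsilon, delta)-DP guarantee referenced in the abstract.
    """
    rng = rng or np.random.default_rng(0)
    n = len(y)
    # Per-example gradients of the logistic loss: (sigmoid(Xw) - y) * x_i
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    grads = (preds - y)[:, None] * X                # shape (n, d)
    # Clip each example's gradient to L2 norm <= clip
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)
    # Sum, add calibrated Gaussian noise, then average and step
    noisy_sum = grads.sum(axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * noisy_sum / n
```

In the LAFTR setting studied here, the same noisy update would be applied to the encoder/classifier parameters while an adversary head tries to predict the sensitive attribute from the learned representation; tuning `sigma` against adversary strength is the trade-off the benchmark explores.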
Pages: 277-282 (6 pages)
Related Papers (showing 31-40 of 50)
  • [31] Learning Rate Adaptation for Differentially Private Learning
    Koskela, Antti
    Honkela, Antti
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 2465 - 2474
  • [32] Tutorial on Fair and Private Deep Learning
    Padala, Manisha
    Damle, Sankarshan
    Gujar, Sujit
    PROCEEDINGS OF 7TH JOINT INTERNATIONAL CONFERENCE ON DATA SCIENCE AND MANAGEMENT OF DATA, CODS-COMAD 2024, 2024, : 510 - 513
  • [33] BDPL: A Boundary Differentially Private Layer Against Machine Learning Model Extraction Attacks
    Zheng, Huadi
    Ye, Qingqing
    Hu, Haibo
    Fang, Chengfang
    Shi, Jie
    COMPUTER SECURITY - ESORICS 2019, PT I, 2019, 11735 : 66 - 83
  • [34] Distributionally-robust machine learning using locally differentially-private data
    Farokhi, Farhad
    OPTIMIZATION LETTERS, 2022, 16 (04) : 1167 - 1179
  • [35] Gradient Sparsification Can Improve Performance of Differentially-Private Convex Machine Learning
    Farokhi, Farhad
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 1695 - 1700
  • [37] A Practical Differentially Private Support Vector Machine
    Xu, Feifei
    Peng, Jia
    Xiang, Ji
    Zha, Daren
    2019 IEEE SMARTWORLD, UBIQUITOUS INTELLIGENCE & COMPUTING, ADVANCED & TRUSTED COMPUTING, SCALABLE COMPUTING & COMMUNICATIONS, CLOUD & BIG DATA COMPUTING, INTERNET OF PEOPLE AND SMART CITY INNOVATION (SMARTWORLD/SCALCOM/UIC/ATC/CBDCOM/IOP/SCI 2019), 2019, : 1237 - 1242
  • [38] Fair Algorithms for Machine Learning
    Kearns, Michael
    EC'17: PROCEEDINGS OF THE 2017 ACM CONFERENCE ON ECONOMICS AND COMPUTATION, 2017, : 1 - 1
  • [39] Quantum Fair Machine Learning
    Perrier, Elija
    AIES '21: PROCEEDINGS OF THE 2021 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, 2021, : 843 - 853
  • [40] Paradoxes in Fair Machine Learning
    Golz, Paul
    Kahng, Anson
    Procaccia, Ariel D.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32