Privacy and Fairness Analysis in the Post-Processed Differential Privacy Framework

Cited: 0
Authors
Zhao, Ying [1 ]
Zhang, Kai [1 ]
Gao, Longxiang [2 ,3 ]
Chen, Jinjun [1 ]
Affiliations
[1] Swinburne Univ Technol, Dept Comp Technol, Melbourne, Vic 3122, Australia
[2] Qilu Univ Technol, Shandong Comp Sci Ctr, Key Lab Comp Power Network & Informat Secur, Minist Educ, Shandong Acad Sci, Jinan 250316, Peoples R China
[3] Shandong Fundamental Res Ctr Comp Sci, Shandong Prov Key Lab Comp Power Internet & Serv C, Jinan 250000, Peoples R China
Keywords
Privacy; Accuracy; Differential privacy; Noise; Resource management; Vectors; Three-dimensional displays; Standards; Sensitivity; Optimization methods; consistency; non-negativity; post-processing; fairness; census data privacy; COUNTS
DOI
10.1109/TIFS.2025.3528222
CLC Classification
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
The post-processed Differential Privacy (DP) framework has been routinely adopted to preserve privacy while maintaining important invariant characteristics of datasets in data-release applications such as census data. Typical invariant characteristics include non-negative counts and the total population. Subspace DP has been proposed to preserve the total population while guaranteeing DP for sub-populations, and non-negativity post-processing has been shown to inherently introduce fairness issues. In this work, we study privacy and unfairness (i.e., accuracy disparity) concerns in the post-processed DP framework. On one hand, we show that the post-processed DP framework with both non-negativity and an accurate total population as constraints inadvertently violates the privacy guarantee it is designed to provide; instead, we propose the post-processed subspace DP framework to accurately define privacy guarantees against adversaries. On the other hand, our empirical analysis identifies that the level of unfairness depends on the privacy budget, count sizes, and their degree of imbalance; particularly concerning is the severe unfairness that arises under strict privacy budgets. We further trace this unfairness back to the uniform privacy budget setting over different population subgroups. To address this, we propose a varying privacy budget setting method and develop optimization approaches using ternary search and golden-ratio search to identify optimal privacy budget ranges that minimize unfairness while maintaining privacy guarantees. Our extensive theoretical and empirical analysis demonstrates the effectiveness of our approaches in addressing severe unfairness issues across different privacy settings and several canonical privacy mechanisms. Using Australian Census data, the Adult dataset, and a dataset of delinquent children by county and household-head education level, we validate both our privacy analysis framework and our fairness optimization methods, showing significant reductions in accuracy disparity while maintaining strong privacy guarantees.
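The abstract describes two ideas that can be illustrated compactly: non-negativity post-processing of noisy counts released by a canonical DP mechanism, and a golden-ratio (golden-section) search over how the privacy budget is allocated across subgroups so as to reduce accuracy disparity. The Python sketch below is a minimal illustration under assumed simplifications, not the paper's actual algorithm: the Laplace mechanism stands in for the paper's canonical mechanisms, the disparity metric (gap between worst and best per-group relative error) and the two-way budget split are invented for illustration, and all function names are hypothetical.

Python sketch:

import math
import numpy as np

rng = np.random.default_rng(0)

def laplace_counts(true_counts, eps, sensitivity=1.0):
    """Release counts under eps-DP with the Laplace mechanism."""
    scale = sensitivity / eps
    return true_counts + rng.laplace(0.0, scale, size=true_counts.shape)

def post_process(noisy):
    """Non-negativity post-processing: clip to zero, then rescale so the total stays invariant."""
    clipped = np.clip(noisy, 0.0, None)
    total = noisy.sum()
    if clipped.sum() > 0 and total > 0:
        clipped = clipped * (total / clipped.sum())
    return clipped

def group_disparity(true_counts, released):
    """Accuracy disparity: gap between the worst and best per-group relative error."""
    rel_err = np.abs(released - true_counts) / np.maximum(true_counts, 1.0)
    return rel_err.max() - rel_err.min()

def expected_disparity(split, true_small, true_large, total_eps=1.0, trials=200):
    """Average disparity when a fraction `split` of the budget goes to the small subgroups."""
    eps_small, eps_large = split * total_eps, (1.0 - split) * total_eps
    vals = []
    for _ in range(trials):
        noisy_small = laplace_counts(true_small, eps_small)
        noisy_large = laplace_counts(true_large, eps_large)
        released = post_process(np.concatenate([noisy_small, noisy_large]))
        vals.append(group_disparity(np.concatenate([true_small, true_large]), released))
    return float(np.mean(vals))

def golden_section_search(f, a, b, tol=1e-2):
    """Minimise a (roughly) unimodal objective f on [a, b] with golden-ratio search."""
    inv_phi = (math.sqrt(5) - 1) / 2
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

if __name__ == "__main__":
    small = np.array([12.0, 8.0, 5.0])          # imbalanced subgroup counts (illustrative)
    large = np.array([900.0, 1200.0, 750.0])
    objective = lambda s: expected_disparity(s, small, large)
    best_split = golden_section_search(objective, 0.1, 0.9)
    print(f"budget fraction for small subgroups ≈ {best_split:.2f}")

Golden-section search is used here because the disparity objective is assumed to be roughly unimodal in the budget split; a ternary search would proceed analogously by evaluating two interior points per iteration.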
Pages: 2412-2423
Number of pages: 12