Tight Auditing of Differentially Private Machine Learning

Cited by: 0
Authors
Nasr, Milad [1 ]
Hayes, Jamie [1 ]
Steinke, Thomas [1 ]
Balle, Borja [1 ]
Tramer, Florian [2 ]
Jagielski, Matthew [1 ]
Carlini, Nicholas [1 ]
Terzis, Andreas [1 ]
Affiliations
[1] Google DeepMind, London, England
[2] ETHZ, Zurich, Switzerland
Keywords
INFERENCE ATTACKS; RISK;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation and computer technology];
Discipline code
0812 ;
Abstract
Auditing mechanisms for differential privacy use probabilistic means to empirically estimate the privacy level of an algorithm. For private machine learning, existing auditing mechanisms are tight: the empirical privacy estimate (nearly) matches the algorithm's provable privacy guarantee. But these auditing techniques suffer from two limitations. First, they only give tight estimates under implausible worst-case assumptions (e.g., a fully adversarial dataset). Second, they require thousands or millions of training runs to produce non-trivial statistical estimates of the privacy leakage. This work addresses both issues. We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets, provided the adversary can observe all model updates during training. Prior auditing works rely on the same assumption, which is permitted under the standard differential privacy threat model; this threat model is also applicable, e.g., in federated learning settings. Moreover, by adapting recent advances in tight composition theorems for differential privacy, our auditing scheme requires only two training runs (instead of thousands) to produce tight privacy estimates. We demonstrate the utility of our improved auditing schemes by surfacing implementation bugs in private machine learning code that eluded prior auditing techniques.
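To make the abstract's notion of an "empirical privacy estimate" concrete, here is a minimal sketch of the standard hypothesis-testing bound that underlies such audits: an attacker who distinguishes two neighboring training runs with true-positive rate TPR and false-positive rate FPR certifies a lower bound eps >= ln((TPR - delta) / FPR) on the mechanism's (eps, delta)-DP parameter. The function name `empirical_epsilon` is hypothetical, and this is the generic bound, not the paper's improved two-run estimator.

```python
import math

def empirical_epsilon(tpr: float, fpr: float, delta: float = 1e-5) -> float:
    # Hypothetical helper (not from the paper): lower-bound epsilon from an
    # attack's true/false positive rates, using the hypothesis-testing view
    # of (eps, delta)-DP: TPR <= exp(eps) * FPR + delta, hence
    # eps >= ln((TPR - delta) / FPR) whenever the attack beats chance.
    if fpr <= 0.0 or tpr <= delta:
        return 0.0  # no usable evidence from this attack
    return max(0.0, math.log((tpr - delta) / fpr))

# A strong attack (TPR 0.9 at FPR 0.1) certifies eps of roughly 2.2,
# while a chance-level attack (TPR = FPR) certifies nothing.
print(empirical_epsilon(0.9, 0.1))
print(empirical_epsilon(0.5, 0.5))
```

In practice such audits replace the raw TPR/FPR with Clopper-Pearson confidence intervals over many trials, which is exactly why naive auditing needs thousands of training runs and why the paper's two-run scheme is significant.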
Pages: 1631 - 1648
Page count: 18
Related Papers
50 records in total
  • [1] A General Framework for Auditing Differentially Private Machine Learning
    Lu, Fred
    Munoz, Joseph
    Fuchs, Maya
    LeBlond, Tyler
    Zaresky-Williams, Elliott
    Raff, Edward
    Ferraro, Francis
    Testa, Brian
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [2] Differentially Private Extreme Learning Machine
    Ono, Hajime
    Tran Thi Phuong
    Le Trieu Phong
    MODELING DECISIONS FOR ARTIFICIAL INTELLIGENCE, MDAI 2024, 2024, 14986 : 165 - 176
  • [3] DiVa: An Accelerator for Differentially Private Machine Learning
    Park, Beomsik
    Hwang, Ranggi
    Yoon, Dongho
    Choi, Yoonhyuk
    Rhu, Minsoo
    2022 55TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO), 2022, : 1200 - 1217
  • [4] Differentially Private ADMM Algorithms for Machine Learning
    Shang, Fanhua
    Xu, Tao
    Liu, Yuanyuan
    Liu, Hongying
    Shen, Longjie
    Gong, Maoguo
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 4733 - 4745
  • [5] Evaluating Differentially Private Machine Learning in Practice
    Jayaraman, Bargav
    Evans, David
    PROCEEDINGS OF THE 28TH USENIX SECURITY SYMPOSIUM, 2019, : 1895 - 1912
  • [6] Privacy Auditing in Differential Private Machine Learning: The Current Trends
    Namatevs, Ivars
    Sudars, Kaspars
    Nikulins, Arturs
    Ozols, Kaspars
    APPLIED SCIENCES-BASEL, 2025, 15 (02)
  • [7] DPMLBench: Holistic Evaluation of Differentially Private Machine Learning
    Wei, Chengkun
    Zhao, Minghu
    Zhang, Zhikun
    Chen, Min
    Meng, Wenlong
    Liu, Bo
    Fan, Yuan
    Chen, Wenzhi
    PROCEEDINGS OF THE 2023 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2023, 2023, : 2621 - +
  • [8] A Survey on Differentially Private Machine Learning [Review Article]
    Gong, Maoguo
    Xie, Yu
    Pan, Ke
    Feng, Kaiyuan
    Qin, A. K.
    IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2020, 15 (02) : 49 - 88
  • [9] Differentially Private Robust ADMM for Distributed Machine Learning
    Ding, Jiahao
    Zhang, Xinyue
    Chen, Mingsong
    Xue, Kaiping
    Zhang, Chi
    Pan, Miao
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019, : 1302 - 1311
  • [10] Differentially Private and Fair Machine Learning: A Benchmark Study
    Eponeshnikov, Alexander
    Bakhtadze, Natalia
    Smirnova, Gulnara
    Sabitov, Rustem
    Sabitov, Shamil
    IFAC PAPERSONLINE, 2024, 58 (19): 277 - 282