Tight Auditing of Differentially Private Machine Learning

Cited: 0
Authors
Nasr, Milad [1 ]
Hayes, Jamie [1 ]
Steinke, Thomas [1 ]
Balle, Borja [1 ]
Tramer, Florian [2 ]
Jagielski, Matthew [1 ]
Carlini, Nicholas [1 ]
Terzis, Andreas [1 ]
Affiliations
[1] Google DeepMind, London, England
[2] ETHZ, Zurich, Switzerland
Keywords
INFERENCE ATTACKS; RISK;
DOI
Not available
CLC number (Chinese Library Classification)
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
Auditing mechanisms for differential privacy use probabilistic means to empirically estimate the privacy level of an algorithm. For private machine learning, existing auditing mechanisms are tight: the empirical privacy estimate (nearly) matches the algorithm's provable privacy guarantee. But these auditing techniques suffer from two limitations. First, they only give tight estimates under implausible worst-case assumptions (e.g., a fully adversarial dataset). Second, they require thousands or millions of training runs to produce non-trivial statistical estimates of the privacy leakage. This work addresses both issues. We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets, provided the adversary can see all model updates during training. Prior auditing works rely on the same assumption, which is permitted under the standard differential privacy threat model. This threat model is also applicable, e.g., in federated learning settings. Moreover, our auditing scheme requires only two training runs (instead of thousands) to produce tight privacy estimates, by adapting recent advances in tight composition theorems for differential privacy. We demonstrate the utility of our improved auditing schemes by surfacing implementation bugs in private machine learning code that eluded prior auditing techniques.
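For context on the "thousands of training runs" the abstract refers to, the following Python sketch illustrates the conventional many-run auditing recipe that this paper improves on, not the paper's own two-run estimator: train repeatedly with and without a target example, run a distinguishing attack, bound its false positive rate (FPR) and false negative rate (FNR) with Clopper-Pearson confidence intervals, and convert those into an empirical lower bound on epsilon via the (epsilon, delta)-DP constraint FPR + e^epsilon * FNR >= 1 - delta (and its symmetric counterpart). All function names, counts, and the 95% confidence level below are illustrative assumptions.

import math
from scipy.stats import beta

def clopper_pearson_upper(errors, trials, alpha=0.05):
    # One-sided (1 - alpha) upper confidence bound on a binomial error rate.
    if errors >= trials:
        return 1.0
    return beta.ppf(1.0 - alpha, errors + 1, trials - errors)

def empirical_epsilon(fp, fn, n_out, n_in, delta=1e-5, alpha=0.05):
    # fp: the attack's false positives over n_out runs trained WITHOUT the
    # target example; fn: its false negatives over n_in runs trained WITH it.
    # Both counts are hypothetical inputs from a membership-style attack.
    fpr = clopper_pearson_upper(fp, n_out, alpha)  # upper-bound both error
    fnr = clopper_pearson_upper(fn, n_in, alpha)   # rates so the bound is valid
    bounds = [0.0]
    if fnr > 0 and 1 - delta - fpr > 0:
        bounds.append(math.log((1 - delta - fpr) / fnr))
    if fpr > 0 and 1 - delta - fnr > 0:
        bounds.append(math.log((1 - delta - fnr) / fpr))
    return max(bounds)

# An attack that errs 10 times on each of 1000 runs per world certifies
# roughly epsilon >= 4 at 95% confidence; note the 2000 training runs needed.
print(empirical_epsilon(fp=10, fn=10, n_out=1000, n_in=1000))

The paper's contribution removes exactly this many-run outer loop: by adapting recent tight composition results and observing all model updates, its scheme extracts a tight estimate from only two training runs.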
Pages: 1631-1648
Number of pages: 18
Related papers
50 records in total
  • [31] Distributionally-robust machine learning using locally differentially-private data
    Farokhi, Farhad
    OPTIMIZATION LETTERS, 2022, 16 (04) : 1167 - 1179
  • [32] Gradient Sparsification Can Improve Performance of Differentially-Private Convex Machine Learning
    Farokhi, Farhad
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 1695 - 1700
  • [34] A Practical Differentially Private Support Vector Machine
    Xu, Feifei
    Peng, Jia
    Xiang, Ji
    Zha, Daren
    2019 IEEE SMARTWORLD, UBIQUITOUS INTELLIGENCE & COMPUTING, ADVANCED & TRUSTED COMPUTING, SCALABLE COMPUTING & COMMUNICATIONS, CLOUD & BIG DATA COMPUTING, INTERNET OF PEOPLE AND SMART CITY INNOVATION (SMARTWORLD/SCALCOM/UIC/ATC/CBDCOM/IOP/SCI 2019), 2019, : 1237 - 1242
  • [35] Differentially Private Hypothesis Transfer Learning
    Wang, Yang
    Gu, Quanquan
    Brown, Donald
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2018, PT II, 2019, 11052 : 811 - 826
  • [36] Differentially Private Learning of Geometric Concepts
    Kaplan, Haim
    Mansour, Yishay
    Matias, Yossi
    Stemmer, Uri
    SIAM JOURNAL ON OPTIMIZATION, 2022, 32 (03) : 952 - 974
  • [37] Stochastic Differentially Private and Fair Learning
    Lowy, Andrew
    Gupta, Devansh
    Razaviyayn, Meisam
    WORKSHOP ON ALGORITHMIC FAIRNESS THROUGH THE LENS OF CAUSALITY AND PRIVACY, VOL 214, 2022, 214 : 86 - 119
  • [38] Differentially Private Learning of Geometric Concepts
    Kaplan, Haim
    Mansour, Yishay
    Matias, Yossi
    Stemmer, Uri
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [39] Differentially private distributed estimation and learning
    Papachristou, Marios
    Rahimian, M. Amin
    IISE TRANSACTIONS, 2024,
  • [40] Differentially Private Distributed Online Learning
    Li, Chencheng
    Zhou, Pan
    Xiong, Li
    Wang, Qian
    Wang, Ting
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2018, 30 (08) : 1440 - 1453