Tight Auditing of Differentially Private Machine Learning

Cited by: 0
Authors
Nasr, Milad [1]
Hayes, Jamie [1]
Steinke, Thomas [1]
Balle, Borja [1]
Tramer, Florian [2]
Jagielski, Matthew [1]
Carlini, Nicholas [1]
Terzis, Andreas [1]
Affiliations
[1] Google DeepMind, London, England
[2] ETHZ, Zurich, Switzerland
Keywords
INFERENCE ATTACKS; RISK
DOI
N/A
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Auditing mechanisms for differential privacy use probabilistic means to empirically estimate the privacy level of an algorithm. For private machine learning, existing auditing mechanisms are tight: the empirical privacy estimate (nearly) matches the algorithm's provable privacy guarantee. But these auditing techniques suffer from two limitations. First, they only give tight estimates under implausible worst-case assumptions (e.g., a fully adversarial dataset). Second, they require thousands or millions of training runs to produce non-trivial statistical estimates of the privacy leakage. This work addresses both issues. We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets, provided the adversary can see all model updates during training. Prior auditing works rely on the same assumption, which is permitted under the standard differential privacy threat model. This threat model is also applicable, e.g., in federated learning settings. Moreover, our auditing scheme requires only two training runs (instead of thousands) to produce tight privacy estimates, by adapting recent advances in tight composition theorems for differential privacy. We demonstrate the utility of our improved auditing schemes by surfacing implementation bugs in private machine learning code that eluded prior auditing techniques.
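The auditing primitive behind such estimates is a distinguishing game: train with and without a "canary" input, attack the released outputs, and convert the attack's false-positive and false-negative rates into a statistically valid lower bound on epsilon. The sketch below illustrates that conversion step for the classic many-run audit that this paper improves on; the function names, the scipy-based Clopper-Pearson helper, and the example numbers are our own illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: converting a distinguishing attack's observed error
# rates into a high-confidence lower bound on epsilon. This is the classic
# many-trial audit that the paper improves on; names and structure are ours.
import math
from scipy.stats import beta  # Clopper-Pearson bounds via Beta quantiles


def clopper_pearson_upper(successes: int, trials: int, alpha: float = 0.05) -> float:
    """Exact one-sided upper confidence bound for a binomial proportion."""
    if successes >= trials:
        return 1.0
    return float(beta.ppf(1.0 - alpha, successes + 1, trials - successes))


def empirical_epsilon_lower_bound(false_positives: int, false_negatives: int,
                                  trials: int, delta: float = 1e-5) -> float:
    """Any (eps, delta)-DP mechanism forces an attacker's false-positive
    rate (FPR) and false-negative rate (FNR) to satisfy
        FPR + exp(eps) * FNR >= 1 - delta   and
        FNR + exp(eps) * FPR >= 1 - delta,
    so high-confidence upper bounds on FPR and FNR give a lower bound on eps.
    For simplicity we assume `trials` attack trials of each kind.
    """
    fpr_hi = clopper_pearson_upper(false_positives, trials)
    fnr_hi = clopper_pearson_upper(false_negatives, trials)
    bounds = [0.0]
    if fnr_hi > 0 and 1 - delta - fpr_hi > 0:
        bounds.append(math.log((1 - delta - fpr_hi) / fnr_hi))
    if fpr_hi > 0 and 1 - delta - fnr_hi > 0:
        bounds.append(math.log((1 - delta - fnr_hi) / fpr_hi))
    return max(bounds)


# Example: an attack that errs 5 times each way over 1,000 canary trials.
print(empirical_epsilon_lower_bound(false_positives=5, false_negatives=5,
                                    trials=1000))
```

Applied naively, this conversion is what makes prior audits expensive: the confidence bounds on FPR and FNR only tighten after thousands of trials, i.e., thousands of training runs. The paper's contribution is to reach comparably tight epsilon estimates from just two training runs by adapting recent tight composition theorems for differential privacy.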
Pages: 1631-1648
Page count: 18
Related Papers
50 items in total (items 21-30 shown)
  • [21] Auditing privacy budget of differentially private neural network models. Huang, Wen; Zhang, Zhishuo; Zhao, Weixin; Peng, Jian; Xu, Wenzheng; Liao, Yongjian; Zhou, Shijie; Wang, Ziming. NEUROCOMPUTING, 2025, 614.
  • [22] Nearly Tight Bounds For Differentially Private Multiway Cut. Dalirrooyfard, Mina; Mitrovic, Slobodan; Nevmyvaka, Yuriy. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023.
  • [23] Differentially Private Distributed Learning. Zhou, Yaqin; Tang, Shaojie. INFORMS JOURNAL ON COMPUTING, 2020, 32(03): 779-789.
  • [24] Differentially Private Fair Learning. Jagielski, Matthew; Kearns, Michael; Mao, Jieming; Oprea, Alina; Roth, Aaron; Sharifi-Malvajerdi, Saeed; Ullman, Jonathan. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97.
  • [25] Differentially Private Reinforcement Learning. Ma, Pingchuan; Wang, Zhiqiang; Zhang, Le; Wang, Ruming; Zou, Xiaoxiang; Yang, Tao. INFORMATION AND COMMUNICATIONS SECURITY (ICICS 2019), 2020, 11999: 668-683.
  • [26] Introducing Machine Learning in Auditing Courses. Huang, Feiqi; Wang, Yunsen. JOURNAL OF EMERGING TECHNOLOGIES IN ACCOUNTING, 2023, 20(01): 195-211.
  • [27] Towards Automated Auditing with Machine Learning. Sifa, Rafet; Ladi, Anna; Pielka, Maren; Ramamurthy, Rajkumar; Hillebrand, Lars; Kirsch, Birgit; Biesner, David; Stenzel, Robin; Bell, Thiago; Luebbering, Max; Nuetten, Ulrich; Bauckhage, Christian; Warning, Ulrich; Fuerst, Benedikt; Khameneh, Tim Dilmaghani; Thom, Daniel; Huseynov, Ilgar; Kahlert, Roland; Schlums, Jennifer; Ismail, Hisham; Kliem, Bernd; Loitz, Ruediger. DOCENG'19: PROCEEDINGS OF THE ACM SYMPOSIUM ON DOCUMENT ENGINEERING 2019, 2019.
  • [28] Learning Rate Adaptation for Differentially Private Learning. Koskela, Antti; Honkela, Antti. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108: 2465-2474.
  • [29] Almost Tight Error Bounds on Differentially Private Continual Counting. Henzinger, Monika; Upadhyay, Jalaj; Upadhyay, Sarvagya. PROCEEDINGS OF THE 2023 ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS, SODA, 2023: 5003-5039.
  • [30] BDPL: A Boundary Differentially Private Layer Against Machine Learning Model Extraction Attacks. Zheng, Huadi; Ye, Qingqing; Hu, Haibo; Fang, Chengfang; Shi, Jie. COMPUTER SECURITY - ESORICS 2019, PT I, 2019, 11735: 66-83.