Privacy Auditing in Differential Private Machine Learning: The Current Trends

Cited: 0
Authors
Namatevs, Ivars [1 ]
Sudars, Kaspars [1 ]
Nikulins, Arturs [1 ]
Ozols, Kaspars [1 ]
Affiliations
[1] Inst Elect & Comp Sci, 14 Dzerbenes St, LV-1006 Riga, Latvia
Source
APPLIED SCIENCES-BASEL | 2025, Vol. 15, Issue 02
Keywords
differential privacy; differentially private machine learning; differential privacy auditing; privacy attacks; membership inference attacks; information leakage; regression models; inversion; intervals; mechanism; security; box
DOI
10.3390/app15020647
CLC classification
O6 [Chemistry]
Subject classification code
0703
Abstract
Differential privacy has recently gained prominence, especially in the context of private machine learning. While the definition of differential privacy makes it possible to provably limit the amount of information leaked by an algorithm, practical implementations of differentially private algorithms often contain subtle vulnerabilities. Therefore, there is a need for effective methods that can audit (ε, δ)-differentially private algorithms before they are deployed in the real world. The article examines studies that audit the privacy guarantees of differentially private machine learning. It covers a wide range of topics on the subject and provides comprehensive guidance for privacy auditing schemes based on privacy attacks, aimed at protecting machine-learning models from privacy leakage. Our results contribute to the growing literature on differential privacy in the realm of privacy auditing and beyond, and pave the way for future research in the field of privacy-preserving models.
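The auditing idea the abstract describes can be illustrated with a minimal sketch: run a mechanism many times on two neighboring datasets, mount a simple distinguishing attack, and turn the attack's success probabilities into an empirical lower bound on ε via the likelihood ratio ln(P[attack fires | D'] / P[attack fires | D]). The example below (an assumed toy setup, not a scheme from the article) audits the Laplace mechanism on a counting query; all names and the threshold 0.5 are illustrative choices.

```python
import math
import random

def laplace_mechanism(true_value, epsilon, sensitivity=1.0):
    """Release true_value + Laplace(sensitivity / epsilon) noise (inverse-CDF sampling)."""
    u = random.random() - 0.5
    scale = sensitivity / epsilon
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def audit_epsilon(claimed_eps, n_trials=200_000, threshold=0.5):
    """Empirical lower bound on epsilon from a threshold attack on neighboring datasets.

    Neighboring datasets D and D' differ in one record, so their query
    answers differ by the sensitivity (0 vs. 1 here). The attack guesses
    "D'" whenever the noisy output exceeds the threshold.
    """
    hits_d = sum(laplace_mechanism(0.0, claimed_eps) > threshold
                 for _ in range(n_trials))
    hits_dprime = sum(laplace_mechanism(1.0, claimed_eps) > threshold
                      for _ in range(n_trials))
    p_d = max(hits_d, 1) / n_trials        # false-positive rate of the attack
    p_dprime = max(hits_dprime, 1) / n_trials  # true-positive rate
    # Any eps-DP mechanism must satisfy p_dprime <= exp(eps) * p_d,
    # so ln(p_dprime / p_d) lower-bounds the true epsilon.
    return math.log(p_dprime / p_d)

random.seed(0)
est = audit_epsilon(claimed_eps=1.0)
print(f"empirical epsilon lower bound: {est:.2f} (claimed 1.0)")
```

If the implementation is correct, the estimate stays below the claimed ε (for this threshold the analytic value is about 0.83); an estimate exceeding the claimed budget would flag a bug, which is exactly the failure mode the survey's auditing schemes target.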
Pages: 54
Related Papers
50 records total
  • [21] Differentially Private Image Classification Using Support Vector Machine and Differential Privacy
    Senekane, Makhamisa
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2019, 1 (01): : 483 - 491
  • [22] Private Learning and Sanitization: Pure vs. Approximate Differential Privacy
    Beimel, Amos
    Nissim, Kobbi
    Stemmer, Uri
    THEORY OF COMPUTING, 2016, 12
  • [23] Auditing privacy budget of differentially private neural network models
    Huang, Wen
    Zhang, Zhishuo
    Zhao, Weixin
    Peng, Jian
    Xu, Wenzheng
    Liao, Yongjian
    Zhou, Shijie
    Wang, Ziming
    NEUROCOMPUTING, 2025, 614
  • [24] Correlated Differential Privacy of Multiparty Data Release in Machine Learning
    Zhao, Jian-Zhe
    Wang, Xing-Wei
    Mao, Ke-Ming
    Huang, Chen-Xi
    Su, Yu-Kai
    Li, Yu-Chen
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2022, 37 (01) : 231 - 251
  • [25] Improved Differential Privacy Noise Mechanism in Quantum Machine Learning
    Yang, Hang
    Li, Xunbo
    Liu, Zhigui
    Pedrycz, Witold
    IEEE ACCESS, 2023, 11 : 50157 - 50164
  • [26] Towards A Differential Privacy and Utility Preserving Machine Learning Classifier
    Mivule, Kato
    Turner, Claude
    Ji, Soo-Yeon
    COMPLEX ADAPTIVE SYSTEMS 2012, 2012, 12 : 176 - 181
  • [27] A Critical Review on the Use (and Misuse) of Differential Privacy in Machine Learning
    Blanco-Justicia, Alberto
    Sanchez, David
    Domingo-Ferrer, Josep
    Muralidhar, Krishnamurty
    ACM COMPUTING SURVEYS, 2023, 55 (08)
  • [28] Stochastic ADMM Based Distributed Machine Learning with Differential Privacy
    Ding, Jiahao
    Errapotu, Sai Mounika
    Zhang, Haijun
    Gong, Yanmin
    Pan, Miao
    Han, Zhu
    SECURITY AND PRIVACY IN COMMUNICATION NETWORKS, SECURECOMM, PT I, 2019, 304 : 257 - 277
  • [29] Data release for machine learning via correlated differential privacy
    Shen, Hua
    Li, Jiqiang
    Wu, Ge
    Zhang, Mingwu
    INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (03)