Privacy Auditing in Differential Private Machine Learning: The Current Trends

Cited: 0
Authors
Namatevs, Ivars [1 ]
Sudars, Kaspars [1 ]
Nikulins, Arturs [1 ]
Ozols, Kaspars [1 ]
Affiliations
[1] Inst Elect & Comp Sci, 14 Dzerbenes St, LV-1006 Riga, Latvia
Source
APPLIED SCIENCES-BASEL | 2025 / Vol. 15 / Issue 02
Keywords
differential privacy; differential private machine learning; differential privacy auditing; privacy attacks; MEMBERSHIP INFERENCE ATTACKS; INFORMATION LEAKAGE; REGRESSION-MODELS; INVERSION; INTERVALS; MECHANISM; SECURITY; BOX;
DOI
10.3390/app15020647
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703 ;
Abstract
Differential privacy has recently gained prominence, especially in the context of private machine learning. While the definition of differential privacy makes it possible to provably limit the amount of information leaked by an algorithm, practical implementations of differentially private algorithms often contain subtle vulnerabilities. Therefore, there is a need for effective methods that can audit (ε, δ)-differentially private algorithms before they are deployed in the real world. This article examines studies that recommend privacy guarantees for differentially private machine learning. It covers a wide range of topics on the subject and provides comprehensive guidance on privacy auditing schemes based on privacy attacks, which protect machine-learning models from privacy leakage. Our results contribute to the growing literature on differential privacy in the realm of privacy auditing and beyond, and pave the way for future research in the field of privacy-preserving models.
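The attack-based auditing idea the abstract refers to can be sketched briefly. A distinguishing (e.g., membership inference) attack is run many times against the mechanism; its false-positive and false-negative rates are then converted into an empirical lower bound on ε, because any (ε, δ)-DP mechanism forces every attack to satisfy FPR + e^ε · FNR ≥ 1 − δ (and symmetrically). The following is a minimal illustrative sketch, not the method of any one surveyed paper; the function names are ours, and exact Clopper–Pearson confidence bounds are used so the estimate is statistically conservative:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson_upper(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) upper confidence bound on a binomial rate,
    found by bisection on the binomial CDF."""
    if k >= n:
        return 1.0
    lo, hi = k / n, 1.0
    for _ in range(60):  # bisect until the interval is negligibly small
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) > alpha:
            lo = mid  # p still too small to be rejected; move up
        else:
            hi = mid
    return hi

def empirical_epsilon_lower_bound(fp, fn, n, delta=1e-5, alpha=0.05):
    """Empirical lower bound on epsilon from a distinguishing attack.

    fp, fn: observed false positives / false negatives out of n trials each.
    An (eps, delta)-DP mechanism implies FPR + e^eps * FNR >= 1 - delta
    (and the symmetric inequality), so conservative upper bounds on the
    error rates translate into a lower bound on eps.
    """
    fpr_hi = clopper_pearson_upper(fp, n, alpha)
    fnr_hi = clopper_pearson_upper(fn, n, alpha)
    candidates = []
    for a, b in ((fpr_hi, fnr_hi), (fnr_hi, fpr_hi)):
        if b > 0 and (1 - delta - a) > b:
            candidates.append(math.log((1 - delta - a) / b))
    return max(candidates, default=0.0)  # 0.0: attack no better than guessing
```

For example, an attack with 50 false positives and 50 false negatives over 1000 trials certifies ε above roughly 2.6, while an attack at chance level (500/500) certifies nothing. If the claimed ε of the implementation falls below the certified lower bound, the implementation is provably buggy.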
Pages: 54
Related Articles
50 records
  • [1] Tight Auditing of Differentially Private Machine Learning
    Nasr, Milad
    Hayes, Jamie
    Steinke, Thomas
    Balle, Borja
    Tramer, Florian
    Jagielski, Matthew
    Carlini, Nicholas
    Terzis, Andreas
    PROCEEDINGS OF THE 32ND USENIX SECURITY SYMPOSIUM, 2023, : 1631 - 1648
  • [2] A General Framework for Auditing Differentially Private Machine Learning
    Lu, Fred
    Munoz, Joseph
    Fuchs, Maya
    LeBlond, Tyler
    Zaresky-Williams, Elliott
    Raff, Edward
    Ferraro, Francis
    Testa, Brian
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [3] Quantum machine learning with differential privacy
    Watkins, William M.
    Chen, Samuel Yen-Chi
    Yoo, Shinjae
    SCIENTIFIC REPORTS, 2023, 13
  • [4] Quantum machine learning with differential privacy
    Watkins, William M.
    Chen, Samuel Yen-Chi
    Yoo, Shinjae
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [5] Preserving User Privacy for Machine Learning: Local Differential Privacy or Federated Machine Learning?
    Zheng, Huadi
    Hu, Haibo
    Han, Ziyang
    IEEE INTELLIGENT SYSTEMS, 2020, 35 (04) : 5 - 14
  • [6] Renyi Differential Privacy of Propose-Test-Release and Applications to Private and Robust Machine Learning
    Wang, Jiachen T.
    Mahloujifar, Saeed
    Wang, Shouda
    Jia, Ruoxi
    Mittal, Prateek
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [7] How Differential Privacy Reinforces Privacy of Machine Learning Models?
    Ben Hamida, Sana
    Mrabet, Hichem
    Jemai, Abderrazak
    ADVANCES IN COMPUTATIONAL COLLECTIVE INTELLIGENCE, ICCCI 2022, 2022, 1653 : 661 - 673
  • [8] Group and Attack: Auditing Differential Privacy
    Lokna, Johan
    Paradis, Anouk
    Dimitrov, Dimitar I.
    Vechev, Martin
    PROCEEDINGS OF THE 2023 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2023, 2023, : 1905 - 1918
  • [9] Deep Learning and Current Trends in Machine Learning
    Bostan, Atila
    Sengul, Gokhan
    Tirkes, Guzin
    Ekin, Cansu
    Karakaya, Murat
    2018 3RD INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND ENGINEERING (UBMK), 2018, : 467 - 470
  • [10] A Marauder's Map of Security and Privacy in Machine Learning: An Overview of Current and Future Research Directions for Making Machine Learning Secure and Private
    Papernot, Nicolas
    AISEC'18: PROCEEDINGS OF THE 11TH ACM WORKSHOP ON ARTIFICIAL INTELLIGENCE AND SECURITY, 2018, : 1 - 1