Improving the Privacy and Practicality of Objective Perturbation for Differentially Private Linear Learners

Cited by: 0
Authors:
Redberg, Rachel [1]
Koskela, Antti [2]
Wang, Yu-Xiang [1]
Affiliations:
[1] UC Santa Barbara, Santa Barbara, CA 93106 USA
[2] Nokia Bell Labs, Helsinki, Finland
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
In the arena of privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) has outstripped the objective perturbation mechanism in popularity and interest. Though unrivaled in versatility, DP-SGD requires a non-trivial privacy overhead (for privately tuning the model's hyperparameters) and a computational cost that can be extravagant for simple models such as linear and logistic regression. This paper revamps the objective perturbation mechanism with tighter privacy analyses and new computational tools that boost it to perform competitively with DP-SGD on unconstrained convex generalized linear problems.
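For readers unfamiliar with the mechanism the paper builds on, the sketch below illustrates the basic shape of objective perturbation for regularized logistic regression: draw a random vector once, add it as a linear term to the training objective, and release the exact minimizer. The function name, the Gaussian noise, and the noise scale `sigma` are illustrative assumptions for this sketch, not the paper's calibrated analysis; a real deployment must set the noise and regularization according to the mechanism's privacy proof.

```python
import numpy as np
from scipy.optimize import minimize


def objective_perturbation_logreg(X, y, eps=1.0, delta=1e-6, lam=0.1, seed=None):
    """Illustrative objective perturbation for logistic regression.

    Minimizes  (1/n) sum_i log(1 + exp(-y_i x_i^T theta))
               + lam * ||theta||^2 + (b^T theta) / n
    where b is a random vector drawn once. Labels y must be in {-1, +1}.
    The noise scale below is a placeholder, NOT a verified (eps, delta)
    guarantee; calibrate it per the mechanism's actual privacy analysis.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / eps  # assumed scale
    b = rng.normal(scale=sigma, size=d)

    def perturbed_loss(theta):
        z = X @ theta
        # Numerically stable log(1 + exp(-y * z))
        log_loss = np.logaddexp(0.0, -y * z).mean()
        return log_loss + lam * (theta @ theta) + (b @ theta) / n

    result = minimize(perturbed_loss, np.zeros(d), method="L-BFGS-B")
    return result.x
```

Note the contrast with DP-SGD: noise is injected once into the objective rather than into every gradient step, so the optimizer itself can run non-privately to convergence, which is part of why the mechanism is attractive for simple convex models.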
Pages: 35