Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo

Cited: 0
Authors
Wang, Yu-Xiang [1 ]
Fienberg, Stephen E. [1 ,2 ]
Smola, Alexander J. [1 ,3 ]
Affiliations
[1] Carnegie Mellon Univ, Machine Learning Dept, Pittsburgh, PA 15213 USA
[2] Carnegie Mellon Univ, Dept Stat, Pittsburgh, PA 15213 USA
[3] Marianas Labs Inc, Pittsburgh, PA 15213 USA
Funding
National Research Foundation of Singapore
Keywords
COMPLEXITY; LANGEVIN;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to "differential privacy", a cryptographic approach to protecting individual-level privacy while permitting database-level utility. Specifically, we show that under standard assumptions, getting one sample from a posterior distribution is differentially private "for free", and that this sample as a statistical estimator is often consistent, near optimal, and computationally tractable. Similarly but separately, we show that a recent line of work that uses stochastic gradients for Hybrid Monte Carlo (HMC) sampling also preserves differential privacy with minor or no modifications of the algorithmic procedure. These observations lead to an "anytime" algorithm for Bayesian learning under privacy constraints. We demonstrate that it performs much better than state-of-the-art differentially private methods on synthetic and real datasets.
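To make the second observation concrete, here is a minimal sketch (not the paper's implementation) of stochastic gradient Langevin dynamics (SGLD) for a toy conjugate Gaussian model, where the exact posterior is known and can be compared against. The key point the abstract alludes to is that the Gaussian noise SGLD already injects into each update is the same kind of noise a differentially private mechanism would add; no extra randomization step appears in the loop. The model, step size, and batch size below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensitive" dataset: x_i ~ N(2, 1), N records.
N = 1000
x = rng.normal(2.0, 1.0, size=N)

# Model: prior theta ~ N(0, 1), likelihood x_i ~ N(theta, 1).
# Conjugacy gives the exact posterior: N(N * mean(x) / (N + 1), 1 / (N + 1)).

def sgld(data, steps=20000, batch=100, step_size=1e-4):
    """SGLD: noisy gradient steps whose stationary law approximates the posterior."""
    theta = 0.0
    samples = []
    for _ in range(steps):
        idx = rng.choice(len(data), size=batch, replace=False)
        # Gradient of log prior plus minibatch gradient of log likelihood,
        # rescaled by N / batch to estimate the full-data gradient.
        grad = -theta + (len(data) / batch) * np.sum(data[idx] - theta)
        # Langevin update: half a gradient step plus N(0, step_size) noise.
        theta += 0.5 * step_size * grad + rng.normal(0.0, np.sqrt(step_size))
        samples.append(theta)
    return np.array(samples[steps // 2:])  # discard the first half as burn-in

samples = sgld(x)
post_mean = N * x.mean() / (N + 1)
print(f"exact posterior mean {post_mean:.4f}, SGLD mean {samples.mean():.4f}")
```

Because each update touches the data only through a noisy minibatch gradient, the privacy analysis in the paper can account for the per-step Gaussian perturbation directly; stopping the chain early simply yields a cruder posterior approximation, which is what makes the resulting procedure "anytime".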
Pages: 2493-2502 (10 pages)