Quantifying Membership Privacy via Information Leakage

Cited: 17
Authors
Saeidian, Sara [1 ]
Cervia, Giulia [2 ,3 ]
Oechtering, Tobias J. [1 ]
Skoglund, Mikael [1 ]
Affiliations
[1] KTH Royal Inst Technol, Div Informat Sci & Engn, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
[2] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
[3] Univ Lille, Ctr Digital Syst, IMT Lille Douai, Inst Mines Telecom, F-59000 Lille, France
Keywords
Privacy; Differential privacy; Measurement; Training; Machine learning; Data models; Upper bound; Privacy-preserving machine learning; membership inference; maximal leakage; log-concave probability density
DOI
10.1109/TIFS.2021.3073804
Chinese Library Classification
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
Machine learning models are known to memorize the unique properties of individual data points in a training set. This memorization capability can be exploited by several types of attacks to infer information about the training data, most notably, membership inference attacks. In this paper, we propose an approach based on information leakage for guaranteeing membership privacy. Specifically, we propose to use a conditional form of the notion of maximal leakage to quantify the information leaking about individual data entries in a dataset, i.e., the entrywise information leakage. We apply our privacy analysis to the Private Aggregation of Teacher Ensembles (PATE) framework for privacy-preserving classification of sensitive data and prove that the entrywise information leakage of its aggregation mechanism is Schur-concave when the injected noise has a log-concave probability density. The Schur-concavity of this leakage implies that increased consensus among teachers in labeling a query reduces its associated privacy cost. Finally, we derive upper bounds on the entrywise information leakage when the aggregation mechanism uses Laplace distributed noise.
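The aggregation mechanism the abstract analyzes is PATE's noisy-max vote: each teacher labels a query, and the released label is the argmax of the Laplace-noised vote histogram. The sketch below is illustrative only, assuming the standard noisy-max formulation; the function name, the `scale` parameter, and the example vote counts are not from the paper. The Schur-concavity result in the abstract says, informally, that the more concentrated the vote histogram is on one class (stronger teacher consensus), the less entrywise information the released label leaks.

```python
import numpy as np

def noisy_argmax(vote_counts, scale, rng=None):
    """PATE-style noisy-max aggregation (illustrative sketch).

    vote_counts: per-class teacher vote tally for one query.
    scale: Laplace noise scale b; larger b gives stronger privacy
           (lower leakage) at the cost of labeling accuracy.
    """
    rng = np.random.default_rng(rng)
    noisy = np.asarray(vote_counts, dtype=float)
    noisy = noisy + rng.laplace(0.0, scale, size=len(noisy))
    return int(np.argmax(noisy))

# Example: 250 teachers, 10 classes, strong consensus on class 3.
votes = np.zeros(10)
votes[3], votes[0] = 230.0, 20.0
label = noisy_argmax(votes, scale=20.0, rng=0)
```

With a vote gap of 210 and noise scale 20, the consensus label survives the noise with high probability; for a near-tied histogram the same noise flips the outcome often, which is the regime where the entrywise leakage bound is largest.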
Pages: 3096-3108
Page count: 13