Learning-Based Difficulty Calibration for Enhanced Membership Inference Attacks

Cited: 0
Authors
Shi, Haonan [1 ]
Ouyang, Tu [1 ]
Wang, An [1 ]
Affiliations
[1] Case Western Reserve Univ, Cleveland, OH 44106 USA
Keywords
DOI
10.1109/EuroSP60621.2024.00012
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Machine learning models, in particular deep neural networks, are currently an integral part of various applications, from healthcare to finance. However, using sensitive data to train these models raises concerns about privacy and security. One method that has emerged to verify whether trained models are privacy-preserving is Membership Inference Attacks (MIAs), which allow adversaries to determine whether a specific data point was part of a model's training dataset. While a series of MIAs have been proposed in the literature, only a few can achieve high True Positive Rates (TPR) in the low False Positive Rate (FPR) region (0.01% to 1%). This is a crucial factor for an MIA to be practically useful in real-world settings. In this paper, we present a novel approach to MIA that is aimed at significantly improving TPR at low FPRs. Our method, named learning-based difficulty calibration for MIA (LDC-MIA), characterizes data records by their hardness levels, using a neural network classifier to determine membership. The experimental results show that LDC-MIA can improve TPR at low FPR by up to 4x compared to other difficulty-calibration-based MIAs. It also has the highest Area Under the ROC Curve (AUC) across all datasets. Our method's cost is comparable to that of most existing MIAs, and it is orders of magnitude more efficient than one of the state-of-the-art methods, LiRA, while achieving similar performance.
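The abstract does not give implementation details, but the general idea behind difficulty-calibration-based MIAs and the TPR-at-low-FPR metric can be sketched as follows. Everything below is an illustrative assumption for exposition, not the paper's actual LDC-MIA design: the calibrated score here is simply the sample's reference-model ("hardness") loss minus its target-model loss, and the loss values are synthetic.

```python
import numpy as np

def calibrated_scores(target_loss, reference_loss):
    # Difficulty calibration: a sample whose loss under the target model is
    # much lower than its hardness (loss under a reference model trained
    # without it) is likely a training-set member.
    return reference_loss - target_loss

def threshold_at_fpr(nonmember_scores, fpr):
    # Pick the score threshold so that at most `fpr` of known non-members
    # are falsely flagged as members.
    return np.quantile(nonmember_scores, 1.0 - fpr)

def tpr_at_fpr(member_scores, nonmember_scores, fpr):
    # True positive rate of the attack at the chosen low-FPR operating point.
    t = threshold_at_fpr(nonmember_scores, fpr)
    return float(np.mean(member_scores > t))

# Synthetic illustration: members tend to have much lower target-model loss
# than their hardness would predict; non-members' losses track their hardness.
rng = np.random.default_rng(0)
ref = rng.gamma(2.0, 1.0, size=2000)                  # per-sample hardness
member_target = ref * rng.uniform(0.1, 0.5, 2000)     # memorized -> low loss
nonmember_target = ref * rng.uniform(0.7, 1.3, 2000)  # no memorization

m = calibrated_scores(member_target, ref)
n = calibrated_scores(nonmember_target, ref)
print(f"TPR at 1% FPR: {tpr_at_fpr(m, n, 0.01):.2f}")
```

LDC-MIA replaces this hand-crafted score difference with a learned classifier over hardness-related features; the thresholding-at-a-fixed-FPR step, however, is the standard way the low-FPR regime discussed in the abstract is evaluated.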
Pages: 62-77
Page count: 16
Related Papers
50 records in total
  • [1] On the Difficulty of Membership Inference Attacks
    Rezaei, Shahbaz
    Liu, Xin
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 7888 - 7896
  • [2] Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning
    Hu, Hongsheng
    Zhang, Xuyun
    Salcic, Zoran
    Sun, Lichao
    Choo, Kim-Kwang Raymond
    Dobbie, Gillian
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 3012 - 3029
  • [3] Balancing Privacy and Attack Utility: Calibrating Sample Difficulty for Membership Inference Attacks in Transfer Learning
    Liu, Shuwen
    Qian, Yongfeng
    Hao, Yixue
    2024 54TH ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS-SUPPLEMENTAL VOLUME, DSN-S 2024, 2024, : 159 - 160
  • [4] Membership Inference Attacks on Machine Learning: A Survey
    Hu, Hongsheng
    Salcic, Zoran
    Sun, Lichao
    Dobbie, Gillian
    Yu, Philip S.
    Zhang, Xuyun
    ACM COMPUTING SURVEYS, 2022, 54 (11S)
  • [5] Enhance membership inference attacks in federated learning
    He, Xinlong
    Xu, Yang
    Zhang, Sicong
    Xu, Weida
    Yan, Jiale
    COMPUTERS & SECURITY, 2024, 136
  • [6] Mitigating Membership Inference Attacks in Machine Learning as a Service
    Bouhaddi, Myria
    Adi, Kamel
    2023 IEEE INTERNATIONAL CONFERENCE ON CYBER SECURITY AND RESILIENCE, CSR, 2023, : 262 - 268
  • [7] A Survey on Membership Inference Attacks Against Machine Learning
    Bai, Yang
    Chen, Ting
    Fan, Mingyu
    International Journal of Network Security, 2021, 23 (04) : 685 - 697
  • [8] Poster: Membership Inference Attacks via Contrastive Learning
    Chen, Depeng
    Liu, Xiao
    Cui, Jie
    Zhong, Hong
    PROCEEDINGS OF THE 2023 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2023, 2023, : 3555 - 3557
  • [9] Membership Inference Attacks Against Machine Learning Models
    Shokri, Reza
    Stronati, Marco
    Song, Congzheng
    Shmatikov, Vitaly
    2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, : 3 - 18
  • [10] Rethinking Membership Inference Attacks Against Transfer Learning
    Wu, Cong
    Chen, Jing
    Fang, Qianru
    He, Kun
    Zhao, Ziming
    Ren, Hao
    Xu, Guowen
    Liu, Yang
    Xiang, Yang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 6441 - 6454