Learning-Based Difficulty Calibration for Enhanced Membership Inference Attacks

Cited: 0
Authors
Shi, Haonan [1 ]
Ouyang, Tu [1 ]
Wang, An [1 ]
Affiliations
[1] Case Western Reserve Univ, Cleveland, OH 44106 USA
DOI
10.1109/EuroSP60621.2024.00012
CLC Classification
TP18 (Artificial Intelligence Theory)
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Machine learning models, in particular deep neural networks, are currently an integral part of various applications, from healthcare to finance. However, using sensitive data to train these models raises concerns about privacy and security. One method that has emerged to verify whether trained models are privacy-preserving is Membership Inference Attacks (MIA), which allow adversaries to determine whether a specific data point was part of a model's training dataset. While a series of MIAs have been proposed in the literature, only a few can achieve high True Positive Rates (TPR) in the low False Positive Rate (FPR) region (0.01% to 1%). This is a crucial factor for an MIA to be practically useful in real-world settings. In this paper, we present a novel approach to MIA that is aimed at significantly improving TPR at low FPRs. Our method, named learning-based difficulty calibration for MIA (LDC-MIA), characterizes data records by their hardness levels using a neural network classifier to determine membership. The experimental results show that LDC-MIA can improve TPR at low FPR by up to 4x compared to other difficulty calibration-based MIAs. It also has the highest Area Under the ROC curve (AUC) across all datasets. Our method's cost is comparable to most existing MIAs, but it is orders of magnitude more efficient than one of the state-of-the-art methods, LiRA, while achieving similar performance.
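The two ideas central to the abstract, difficulty calibration and evaluation by TPR at a fixed low FPR, can be illustrated with a short sketch. Note this is not the authors' code: LDC-MIA trains a neural-network classifier over hardness signals, whereas the sketch below shows only the simpler loss-difference calibration that such methods build on, using invented synthetic losses (the loss model, variable names, and thresholds are all illustrative assumptions).

```python
# Hypothetical sketch: difficulty-calibrated membership score and TPR at a
# fixed low-FPR budget. The synthetic loss model below is an assumption.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def calibrated_score(target_loss, reference_loss):
    """Subtract the target model's loss from a reference model's loss,
    so intrinsically hard samples are not mistaken for non-members."""
    return reference_loss - target_loss

def tpr_at_fpr(labels, scores, max_fpr=0.01):
    """Highest TPR achievable while keeping FPR within max_fpr."""
    fpr, tpr, _ = roc_curve(labels, scores)
    return tpr[fpr <= max_fpr].max()

# Synthetic losses: per-sample difficulty d drives the reference model's
# loss; the target model fits its members far better than non-members.
rng = np.random.default_rng(0)
n = 1000
d_mem, d_non = rng.exponential(1.0, n), rng.exponential(1.0, n)
member_scores = calibrated_score(0.1 * d_mem, d_mem + rng.normal(0, 0.1, n))
nonmember_scores = calibrated_score(d_non + rng.normal(0, 0.1, n),
                                    d_non + rng.normal(0, 0.1, n))
labels = np.concatenate([np.ones(n), np.zeros(n)])
scores = np.concatenate([member_scores, nonmember_scores])
low_fpr_tpr = tpr_at_fpr(labels, scores, max_fpr=0.01)
auc = roc_auc_score(labels, scores)
```

Reporting `low_fpr_tpr` rather than only `auc` is the evaluation practice the abstract emphasizes: an attack can have a respectable AUC yet be useless in practice if its confident predictions at a 0.01%-1% FPR budget are no better than chance.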
Pages: 62-77
Page count: 16
Related Papers
50 items total
  • [41] Black-box membership inference attacks based on shadow model
    Han, Zhen
    Zhou, Wen'an
    Han, Xiaoxuan
    Wu, Jie
    The Journal of China Universities of Posts and Telecommunications, 2024, 31 (04): 1 - 16
  • [43] Advancing membership inference attacks: The present and the future
    Zheng Li
    Yang Zhang
    Security and Safety, 2025, 4 (01) : 6 - 9
  • [44] Do Backdoors Assist Membership Inference Attacks?
    Goto, Yumeki
    Ashizawa, Nami
    Shibahara, Toshiki
    Yanai, Naoto
    SECURITY AND PRIVACY IN COMMUNICATION NETWORKS, PT II, SECURECOMM 2023, 2025, 568 : 251 - 265
  • [45] Membership Inference Attacks Against the Graph Classification
    Yang, Junze
    Li, Hongwei
    Fan, Wenshu
    Zhang, Xilin
    Hao, Meng
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 6729 - 6734
  • [46] Membership Inference Attacks are Easier on Difficult Problems
    Shafran, Avital
    Peleg, Shmuel
    Hoshen, Yedid
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 14800 - 14809
  • [47] Detection of Membership Inference Attacks on GAN Models
    Ekramifard, Ala
    Amintoosi, Haleh
    Seno, Seyed Amin Hosseini
    ISECURE-ISC INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2025, 17 (01): : 43 - 57
  • [48] Label-Only Membership Inference Attacks
    Choquette-Choo, Christopher A.
    Tramer, Florian
    Carlini, Nicholas
    Papernot, Nicolas
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [49] Membership Inference Attacks and Generalization: A Causal Perspective
    Baluta, Teodora
    Shen, Shiqi
    Hitarth, S.
    Tople, Shruti
    Saxena, Prateek
    PROCEEDINGS OF THE 2022 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2022, 2022, : 249 - 262
  • [50] Membership Inference Attacks and Defenses in Classification Models
    Li, Jiacheng
    Li, Ninghui
    Ribeiro, Bruno
    PROCEEDINGS OF THE ELEVENTH ACM CONFERENCE ON DATA AND APPLICATION SECURITY AND PRIVACY (CODASPY '21), 2021, : 5 - 16