Black-box membership inference attacks based on shadow model

Cited: 0
Authors
Han Zhen
Zhou Wen'an
Han Xiaoxuan
Wu Jie
Affiliation
[1] School of Computer Science, Beijing University of Posts and Telecommunications
Keywords
DOI
Not available
CLC Number
TP181 [Automated Reasoning, Machine Learning]; TP309 [Security and Confidentiality]
Subject Classification Codes
081201; 0839; 1402
Abstract
Membership inference attacks on machine learning models have drawn significant attention. Current research relies primarily on shadow modeling techniques, which require knowledge of the target model and its training data, whereas practical scenarios offer only black-box access to the target model and no such information. Limited training data further complicates these attacks. In this paper, we experimentally compare common data augmentation schemes and propose a data synthesis framework based on the variational autoencoder generative adversarial network (VAE-GAN) to extend the training data for shadow models. We also propose a shadow model training algorithm based on adversarial training, which improves the shadow model's ability to mimic the predicted behavior of the target model when the target model's internals are unknown. Attack experiments on different models under the black-box access setting verify that the VAE-GAN-based data synthesis framework improves the accuracy of membership inference attacks, and that the shadow model trained with the adversarial training approach mimics the predicted behavior of the target model more closely. Compared with existing methods, the method proposed in this paper achieves a 2% improvement in attack accuracy and delivers better overall attack performance.
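The abstract outlines a shadow-model attack pipeline. The following is a minimal sketch of the generic shadow-model membership inference attack that such work builds on, not the paper's method: it substitutes synthetic Gaussian data for the VAE-GAN data synthesis, omits the adversarial training of shadow models, and uses assumed scikit-learn MLPClassifier models for the target, shadow, and attack models.

```python
# Minimal sketch of a shadow-model membership inference attack.
# Assumptions: sklearn MLPClassifier stands in for target/shadow/attack models,
# and synthetic Gaussian data stands in for VAE-GAN-synthesized records.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_data(n, dim=20):
    """Toy binary-classification data; a placeholder for real or synthesized records."""
    X = rng.normal(size=(n, dim))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y

# --- target model (black-box: the attacker only sees its predicted probabilities) ---
X_t, y_t = make_data(2000)
X_in, X_out, y_in, y_out = train_test_split(X_t, y_t, test_size=0.5, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0).fit(X_in, y_in)

# --- shadow models trained on data the attacker controls ---
attack_X, attack_y = [], []
for s in range(4):
    Xs, ys = make_data(2000)  # stand-in for VAE-GAN synthesized data
    Xs_in, Xs_out, ys_in, ys_out = train_test_split(Xs, ys, test_size=0.5, random_state=s)
    shadow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=s).fit(Xs_in, ys_in)
    # Attack-model training set: confidence vectors labeled member (1) / non-member (0).
    attack_X.append(shadow.predict_proba(Xs_in));  attack_y.append(np.ones(len(Xs_in)))
    attack_X.append(shadow.predict_proba(Xs_out)); attack_y.append(np.zeros(len(Xs_out)))

attack = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
attack.fit(np.vstack(attack_X), np.concatenate(attack_y))

# --- attack the target: infer membership from its output confidences alone ---
pred_in = attack.predict(target.predict_proba(X_in))
pred_out = attack.predict(target.predict_proba(X_out))
balanced_acc = (pred_in.mean() + (1 - pred_out.mean())) / 2
print(f"membership inference balanced accuracy: {balanced_acc:.3f}")
```

In the paper's setting, the synthetic shadow training data would come from the VAE-GAN framework and the shadow models would additionally be trained adversarially to match the target model's predicted behavior; the attack-model stage above is otherwise the standard shadow-model construction.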
Pages: 1-16
Page count: 16
Related Papers
50 records
  • [21] Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks
    Zhao, Anqi
    Chu, Tong
    Liu, Yahao
Li, Wen
    Li, Jingjing
    Duan, Lixin
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 8153 - 8162
  • [22] Black-Box Adversarial Attacks Against SQL Injection Detection Model
    Alqhtani, Maha
    Alghazzawi, Daniyal
    Alarifi, Suaad
CONTEMPORARY MATHEMATICS, 2024, 5 (04): 5098 - 5112
  • [23] Resiliency of SNN on Black-Box Adversarial Attacks
    Paudel, Bijay Raj
    Itani, Aashish
    Tragoudas, Spyros
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 799 - 806
  • [24] SoK: Pitfalls in Evaluating Black-Box Attacks
    Suya, Fnu
    Suri, Anshuman
    Zhang, Tingwei
    Hong, Jingtao
    Tian, Yuan
    Evans, David
    IEEE CONFERENCE ON SAFE AND TRUSTWORTHY MACHINE LEARNING, SATML 2024, 2024, : 387 - 407
  • [25] Deep State Inference: Toward Behavioral Model Inference of Black-Box Software Systems
    Ataiefard, Foozhan
    Mashhadi, Mohammad Jafar
    Hemmati, Hadi
    Walkinshaw, Neil
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2022, 48 (12) : 4857 - 4872
  • [26] Beating White-Box Defenses with Black-Box Attacks
    Kumova, Vera
    Pilat, Martin
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [27] Constructive Membership Testing in Black-Box Classical Groups
    Ambrose, Sophie
    Murray, Scott H.
    Praeger, Cheryl E.
    Schneider, Csaba
    MATHEMATICAL SOFTWARE - ICMS 2010, 2010, 6327 : 54 - +
  • [28] TransMIA: Membership Inference Attacks Using Transfer Shadow Training
    Hidano, Seira
    Murakami, Takao
    Kawamoto, Yusuke
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [29] Improving Black-box Adversarial Attacks with a Transfer-based Prior
    Cheng, Shuyu
    Dong, Yinpeng
    Pang, Tianyu
    Su, Hang
    Zhu, Jun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [30] A Black-box Model for Neurons
    Roqueiro, N.
    Claumann, C.
    Guillamon, A.
    Fossas, E.
    2019 IEEE 10TH LATIN AMERICAN SYMPOSIUM ON CIRCUITS & SYSTEMS (LASCAS), 2019, : 129 - 132