GanNoise: Defending against black-box membership inference attacks by countering noise generation

Cited by: 2
Authors
Liang, Jiaming [1 ]
Huang, Teng [1 ]
Luo, Zidan [1 ]
Li, Dan [1 ]
Li, Yunhao [1 ]
Ding, Ziyu [1 ]
Affiliations
[1] Guangzhou Univ, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
data privacy; deep learning; MIA defense;
DOI
10.1109/DSPP58763.2023.10405019
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
In recent years, data privacy in deep learning has attracted notable interest. Pre-trained large-scale data-driven models are at risk of membership inference attacks (MIAs). However, current defenses against such data leakage may reduce the performance of pre-trained models. In this paper, we propose a novel training framework, GanNoise, that preserves privacy while maintaining accuracy on classification tasks. By using adversarial regularization to train a noise generation model, we generate noise that adds randomness to private data during model training, effectively preventing excessive memorization of the actual training data. Our experimental results demonstrate the efficacy of the framework against existing attack schemes on various datasets, while outperforming advanced MIA defense solutions in terms of efficiency.
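The core idea in the abstract, injecting fresh noise into private training data so the model cannot over-memorize individual examples, can be illustrated with a minimal sketch. Note the assumptions: GanNoise trains its noise generator adversarially against an attack model, whereas this sketch substitutes a fixed-scale Gaussian perturbation as a placeholder, and a logistic-regression classifier stands in for a deep network. The membership signal shown (member confidence distance from 0.5) is a common black-box MIA proxy, not the paper's exact metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a private training set (binary classification).
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, noise_scale=0.0, epochs=300, lr=0.1, seed=1):
    """Gradient-descent logistic regression; each epoch sees freshly
    perturbed inputs. The fixed Gaussian noise_scale is an assumption
    standing in for the output of a learned noise generator."""
    g = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        Xn = X + noise_scale * g.normal(size=X.shape)  # randomize private data
        p = sigmoid(Xn @ w)
        w -= lr * (Xn.T @ (p - y)) / len(y)
    return w

w_plain = train_logreg(X, y, noise_scale=0.0)   # memorizes freely
w_noisy = train_logreg(X, y, noise_scale=0.5)   # noise-regularized

# Crude black-box membership signal: how confident the model is on its
# own training members. Lower confidence => weaker signal for an MIA.
conf_plain = np.mean(np.abs(sigmoid(X @ w_plain) - 0.5))
conf_noisy = np.mean(np.abs(sigmoid(X @ w_noisy) - 0.5))
acc_noisy = np.mean((sigmoid(X @ w_noisy) > 0.5) == y)
```

Input noise acts as an implicit regularizer: the noise-trained model remains accurate on the task while emitting less extreme confidences on its training members, which is the trade-off the framework aims to balance.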
Pages: 32-40 (9 pages)