GanNoise: Defending against black-box membership inference attacks by countering noise generation

Cited: 2
Authors
Liang, Jiaming [1 ]
Huang, Teng [1 ]
Luo, Zidan [1 ]
Li, Dan [1 ]
Li, Yunhao [1 ]
Ding, Ziyu [1 ]
Affiliations
[1] Guangzhou Univ, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
data privacy; deep learning; MIA defense;
DOI
10.1109/DSPP58763.2023.10405019
CLC Classification Number
TP301 [Theory and Methods];
Discipline Classification Code
081202;
Abstract
In recent years, data privacy in deep learning has attracted considerable attention. Pre-trained large-scale data-driven models are at risk of membership inference attacks (MIAs), yet current defenses against such data leakage can degrade the performance of the pre-trained models. In this paper, we propose a novel training framework called GanNoise that preserves privacy while maintaining the accuracy of classification tasks. By using adversarial regularization to train a noise generation model, we generate noise that adds randomness to private data during model training, effectively preventing the model from memorizing the actual training data. Our experimental results demonstrate the efficacy of the framework against existing attack schemes on various datasets, while outperforming advanced MIA defense solutions in terms of efficiency.
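Since the abstract only describes the training procedure at a high level, the following minimal PyTorch sketch illustrates what adversarial regularization with a trainable noise generator can look like in code. It is an illustration under stated assumptions, not the paper's implementation: the module names (NoiseGenerator, AttackModel, train_step), architectures, and hyperparameters (eps, lam) are all hypothetical.

# --- Illustrative sketch (hypothetical; not the authors' code) --------------
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseGenerator(nn.Module):
    # Maps an input to a bounded perturbation; architecture is hypothetical.
    def __init__(self, in_dim, eps=0.1):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim), nn.Tanh())  # Tanh bounds output to [-1, 1]

    def forward(self, x):
        return (self.eps * self.net(x.flatten(1))).view_as(x)

class AttackModel(nn.Module):
    # Stand-in black-box MIA: infers membership from the softmax output alone.
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, probs):
        return self.net(probs)  # membership logit

def train_step(classifier, generator, attacker, member_x, member_y,
               nonmember_x, opt_cls, opt_gen, opt_atk, lam=1.0):
    # Step 1: train the attacker to separate member vs. non-member outputs.
    with torch.no_grad():
        p_in = F.softmax(classifier(member_x + generator(member_x)), dim=1)
        p_out = F.softmax(classifier(nonmember_x), dim=1)
    logits = torch.cat([attacker(p_in), attacker(p_out)])
    labels = torch.cat([torch.ones(len(p_in), 1),
                        torch.zeros(len(p_out), 1)]).to(logits.device)
    atk_loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt_atk.zero_grad(); atk_loss.backward(); opt_atk.step()

    # Step 2: train classifier + generator to fit the noised data while
    # keeping the attacker's membership confidence low (adversarial game).
    out = classifier(member_x + generator(member_x))
    cls_loss = F.cross_entropy(out, member_y)
    mia_conf = attacker(F.softmax(out, dim=1)).mean()
    loss = cls_loss + lam * mia_conf  # lam trades accuracy for privacy
    opt_cls.zero_grad(); opt_gen.zero_grad()
    loss.backward()
    opt_cls.step(); opt_gen.step()
    return cls_loss.item(), atk_loss.item()

The min-max structure follows the standard adversarial-regularization recipe for MIA defenses: the attack model learns to tell member outputs from non-member outputs, while the classifier and noise generator jointly minimize classification loss plus the attacker's confidence on members.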
Pages: 32-40
Number of pages: 9
Related Papers
50 records in total
  • [41] Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
    Co, Kenneth T.
    Munoz-Gonzalez, Luis
    de Maupeou, Sixte
    Lupu, Emil C.
    PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 275 - 289
  • [42] Black-Box Data Poisoning Attacks on Crowdsourcing
    Chen, Pengpeng
    Yang, Yongqiang
    Yang, Dingqi
    Sun, Hailong
    Chen, Zhijun
    Lin, Peng
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 2975 - 2983
  • [43] Toward Visual Distortion in Black-Box Attacks
    Li, Nannan
    Chen, Zhenzhong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 6156 - 6167
  • [44] Provably Efficient Black-Box Action Poisoning Attacks Against Reinforcement Learning
    Liu, Guanlin
    Lai, Lifeng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [45] Resiliency of SNN on Black-Box Adversarial Attacks
    Paudel, Bijay Raj
    Itani, Aashish
    Tragoudas, Spyros
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 799 - 806
  • [46] PRADA: Practical Black-box Adversarial Attacks against Neural Ranking Models
    Wu, Chen
    Zhang, Ruqing
    Guo, Jiafeng
    De Rijke, Maarten
    Fan, Yixing
    Cheng, Xueqi
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (04)
  • [47] Black-Box Attacks against Signed Graph Analysis via Balance Poisoning
    Zhou, Jialong
    Lai, Yuni
    Ren, Jian
    Zhou, Kai
    2024 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS, ICNC, 2024, : 530 - 535
  • [48] Adversarial Black-Box Attacks Against Network Intrusion Detection Systems: A Survey
    Alatwi, Huda Ali
    Aldweesh, Amjad
    2021 IEEE WORLD AI IOT CONGRESS (AIIOT), 2021, : 34 - 40
  • [49] SoK: Pitfalls in Evaluating Black-Box Attacks
    Suya, Fnu
    Suri, Anshuman
    Zhang, Tingwei
    Hong, Jingtao
    Tian, Yuan
    Evans, David
    IEEE CONFERENCE ON SAFE AND TRUSTWORTHY MACHINE LEARNING, SATML 2024, 2024, : 387 - 407
  • [50] Beating White-Box Defenses with Black-Box Attacks
    Kumova, Vera
    Pilat, Martin
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021