GanNoise: Defending against black-box membership inference attacks by countering noise generation

Cited by: 2
Authors
Liang, Jiaming [1 ]
Huang, Teng [1 ]
Luo, Zidan [1 ]
Li, Dan [1 ]
Li, Yunhao [1 ]
Ding, Ziyu [1 ]
Affiliations
[1] Guangzhou Univ, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
data privacy; deep learning; MIA defense;
DOI
10.1109/DSPP58763.2023.10405019
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
In recent years, data privacy in deep learning has seen a notable surge of interest. Pre-trained large-scale data-driven models are at risk of membership inference attacks, yet current defenses against such data leakage tend to reduce the performance of the pre-trained models. In this paper, we propose a novel training framework called GanNoise that preserves privacy while maintaining accuracy on classification tasks. Using adversarial regularization to train a noise generation model, we generate noise that adds randomness to private data during model training, effectively preventing excessive memorization of the actual training data. Our experimental results demonstrate the efficacy of the framework against existing attack schemes on various datasets, while outperforming advanced MIA defense solutions in terms of efficiency.
Pages: 32-40
Page count: 9
Related Papers
50 records in total
  • [31] An Adaptive Black-box Defense against Trojan Attacks on Text Data
    Alsharadgah, Fatima
    Khreishah, Abdallah
    Al-Ayyoub, Mahmoud
    Jararweh, Yaser
    Liu, Guanxiong
    Khalil, Issa
    Almutiry, Muhannad
    Saeed, Nasir
    2021 EIGHTH INTERNATIONAL CONFERENCE ON SOCIAL NETWORK ANALYSIS, MANAGEMENT AND SECURITY (SNAMS), 2021, : 155 - 162
  • [32] Black-box attacks against log anomaly detection with adversarial examples
    Lu, Siyang
    Wang, Mingquan
    Wang, Dongdong
    Wei, Xiang
    Xiao, Sizhe
    Wang, Zhiwei
    Han, Ningning
    Wang, Liqiang
    INFORMATION SCIENCES, 2023, 619 : 249 - 262
  • [33] Black-Box Adversarial Attacks Against SQL Injection Detection Model
    Alqhtani, Maha
    Alghazzawi, Daniyal
    Alarifi, Suaad
    CONTEMPORARY MATHEMATICS, 2024, 5 (04): : 5098 - 5112
  • [34] Efficient Label Contamination Attacks Against Black-Box Learning Models
    Zhao, Mengchen
    An, Bo
    Gao, Wei
    Zhang, Teng
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 3945 - 3951
  • [35] Black-box adversarial attacks against image quality assessment models
    Ran, Yu
    Zhang, Ao-Xiang
    Li, Mingjie
    Tang, Weixuan
    Wang, Yuan-Gen
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 260
  • [36] Black-box Coreset Variational Inference
    Manousakas, Dionysis
    Ritter, Hippolyt
    Karaletsos, Theofanis
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [37] Towards Lightweight Black-Box Attacks Against Deep Neural Networks
    Sun, Chenghao
    Zhang, Yonggang
    Wan, Chaoqun
    Wang, Qizhou
    Li, Ya
    Liu, Tongliang
    Han, Bo
    Tian, Xinmei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [38] Ensemble adversarial black-box attacks against deep learning systems
    Hang, Jie
    Han, Keji
    Chen, Hui
    Li, Yun
    PATTERN RECOGNITION, 2020, 101
  • [39] SHIELD: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher
    Le, Thai
    Park, Noseong
    Lee, Dongwon
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 6661 - 6674
  • [40] Membership Inference Attacks against MemGuard
    Niu, Ben
    Chen, Yahong
    Zhang, Likun
    Li, Fenghua
    2020 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY (CNS), 2020,