GanNoise: Defending against black-box membership inference attacks by countering noise generation

Cited by: 2
|
Authors
Liang, Jiaming [1 ]
Huang, Teng [1 ]
Luo, Zidan [1 ]
Li, Dan [1 ]
Li, Yunhao [1 ]
Ding, Ziyu [1 ]
Affiliations
[1] Guangzhou Univ, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
data privacy; deep learning; MIA defense;
DOI
10.1109/DSPP58763.2023.10405019
CLC number
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
In recent years, data privacy in deep learning has attracted a notable surge of interest. Pre-trained large-scale data-driven models are at risk of membership inference attacks (MIAs). However, existing defenses against such data leakage may degrade the performance of pre-trained models. In this paper, we propose a novel training framework called GanNoise that preserves privacy while maintaining accuracy on classification tasks. By using adversarial regularization to train a noise generation model, we generate noise that adds randomness to private data during model training, effectively preventing excessive memorization of the actual training data. Our experimental results demonstrate the efficacy of the framework against existing attack schemes on various datasets, while outperforming advanced MIA defense solutions in terms of efficiency.
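The training idea in the abstract, perturbing private inputs with generated noise while an adversarial-style regularizer limits memorization, can be sketched in miniature. This is an illustrative reduction, not the authors' method: the paper trains a GAN noise generator against an attack model, which are replaced here by a fixed Gaussian noise scale (`sigma`) and a simple over-confidence penalty on a toy logistic-regression classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy private "member" data: two Gaussian blobs, binary labels.
n, d = 200, 5
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)),
               rng.normal(+1.0, 1.0, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(d), 0.0   # logistic-regression "target model"
sigma = 0.3               # noise scale (stand-in for the learned GAN generator)
lam = 0.1                 # weight of the privacy regularizer
lr = 0.1

for _ in range(300):
    # Perturb the private inputs with generated noise, so the model never
    # fits the exact training records.
    Xn = X + sigma * rng.normal(0.0, 1.0, X.shape)

    p = sigmoid(Xn @ w + b)
    # Standard cross-entropy gradient on the noised inputs.
    g_w = Xn.T @ (p - y) / n
    g_b = np.mean(p - y)
    # Adversarial-style regularizer (stand-in for the attack model):
    # penalize |p - 0.5|, the over-confidence that black-box membership
    # inference attacks exploit to flag training members.
    r = np.sign(p - 0.5) * p * (1.0 - p)
    g_w += lam * Xn.T @ r / n
    g_b += lam * np.mean(r)

    w -= lr * g_w
    b -= lr * g_b

# Utility check on the clean inputs.
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

The key trade-off the paper targets is visible even here: the noise and the regularizer both cap how confident the model can be on its own training points, which is exactly the signal a black-box MIA relies on, while leaving enough margin for the classification task itself.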
Pages: 32-40
Page count: 9
Related papers
50 records
  • [1] MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
    Jia, Jinyuan
    Salem, Ahmed
    Backes, Michael
    Zhang, Yang
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 259 - 274
  • [2] Black-box membership inference attacks based on shadow model
    Han, Zhen
    Zhou, Wen'an
    Han, Xiaoxuan
    Wu, Jie
    Journal of China Universities of Posts and Telecommunications, 2024, 31 (04): 1 - 16
  • [4] On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks
    Byun, Junyoung
    Go, Hyojun
    Kim, Changick
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 3819 - 3828
  • [5] Gradient-Leaks: Enabling Black-Box Membership Inference Attacks Against Machine Learning Models
    Liu, Gaoyang
    Xu, Tianlong
    Zhang, Rui
    Wang, Zixiong
    Wang, Chen
    Liu, Ling
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 427 - 440
  • [6] Defending Against Membership Inference Attacks on Beacon Services
    Venkatesaramani, Rajagopal
    Wan, Zhiyu
    Malin, Bradley A.
    Vorobeychik, Yevgeniy
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2023, 26 (03)
  • [7] Defending Against Membership Inference Attacks With High Utility by GAN
    Hu, Li
    Li, Jin
    Lin, Guanbiao
    Peng, Shiyu
    Zhang, Zhenxin
    Zhang, Yingying
    Dong, Changyu
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (03) : 2144 - 2157
  • [8] Black-Box Based Limited Query Membership Inference Attack
    Zhang, Yu
    Zhou, Huaping
    Wang, Pengyan
    Yang, Gaoming
    IEEE ACCESS, 2022, 10 : 55459 - 55468
  • [9] Random Noise Defense Against Query-Based Black-Box Attacks
    Qin, Zeyu
    Fan, Yanbo
    Zha, Hongyuan
    Wu, Baoyuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [10] Practical Black-Box Attacks against Machine Learning
    Papernot, Nicolas
    McDaniel, Patrick
    Goodfellow, Ian
    Jha, Somesh
    Celik, Z. Berkay
    Swami, Ananthram
    PROCEEDINGS OF THE 2017 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (ASIA CCS'17), 2017, : 506 - 519