EAR: An Enhanced Adversarial Regularization Approach against Membership Inference Attacks

Cited by: 2
Authors
Hu, Hongsheng [1 ]
Salcic, Zoran [1 ]
Dobbie, Gillian [2 ]
Chen, Yi [3 ]
Zhang, Xuyun [4 ]
Affiliations
[1] Univ Auckland, Dept ECE, Auckland, New Zealand
[2] Univ Auckland, Sch Comp Sci, Auckland, New Zealand
[3] Southwest Jiaotong Univ, Sch Informat Sci & Technol, Chengdu, Peoples R China
[4] Macquarie Univ, Dept Comp, Sydney, NSW, Australia
Keywords
Data privacy; Membership inference attacks; Adversarial regularization; Machine learning;
DOI
10.1109/IJCNN52387.2021.9534381
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Membership inference attacks on a machine learning model aim to determine whether a given data record is a member of the model's training set. They pose severe privacy risks to individuals: for example, identifying an individual's participation in a hospital's health-analytics training set reveals that this individual was once a patient in that hospital. Adversarial regularization (AR) is one of the state-of-the-art defense methods that mitigate such attacks while preserving a model's prediction accuracy. AR adds a membership inference attack as a new regularization term to the target model during training, so training a defended model is an adversarial training procedure that is essentially the same as training the generator of a generative adversarial network (GAN). We observe that many GAN variants generate higher-quality samples and offer more training stability than the original GAN. However, whether these GAN variants can be leveraged to improve the effectiveness of AR has not been investigated. In this paper, we propose an enhanced adversarial regularization (EAR) method based on Least Squares GANs (LSGANs). EAR surpasses the existing AR in defensive ability while preserving the prediction accuracy of the protected classifiers. We systematically evaluate EAR on five datasets with different target classifiers under four attack methods and compare it with four other defense methods. We experimentally show that our new method performs best among the compared defenses.
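The abstract's core idea can be illustrated with a minimal NumPy sketch, under assumed notation rather than the authors' actual formulation: the attacker outputs a membership score in [0, 1], EAR replaces the attacker's usual log-loss with the LSGAN least-squares objective, and the defended classifier adds the attacker's success on its training members as a penalty term. The function names, the squared-error targets, and the trade-off weight `lam` are illustrative choices, not values from the paper.

```python
import numpy as np

def lsgan_attack_loss(member_scores, nonmember_scores):
    """Least-squares (LSGAN-style) loss for the inference attacker:
    scores on training members are pushed toward 1, non-members toward 0."""
    member_scores = np.asarray(member_scores, dtype=float)
    nonmember_scores = np.asarray(nonmember_scores, dtype=float)
    return (0.5 * np.mean((member_scores - 1.0) ** 2)
            + 0.5 * np.mean(nonmember_scores ** 2))

def defended_loss(classification_loss, member_scores, lam=1.0):
    """Defender objective: classification loss plus a least-squares penalty
    pushing the attacker's scores on training members toward 0
    ("looks like a non-member"). `lam` trades accuracy for privacy."""
    member_scores = np.asarray(member_scores, dtype=float)
    return classification_loss + lam * 0.5 * np.mean(member_scores ** 2)

# A confident attacker (scores near the true labels) achieves a low loss...
sharp = lsgan_attack_loss([0.9, 0.95], [0.05, 0.1])
# ...while an attacker fooled into outputting ~0.5 everywhere does not.
fooled = lsgan_attack_loss([0.5, 0.5], [0.5, 0.5])
```

In an actual EAR-style training loop these two objectives would be optimized alternately, mirroring the discriminator/generator updates of a GAN.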
Pages: 8