Assessment of data augmentation, dropout with L2 Regularization and differential privacy against membership inference attacks

Cited by: 4
Authors
Ben Hamida, Sana [1 ,2 ,3 ]
Mrabet, Hichem [3 ,4 ]
Chaieb, Faten [5 ]
Jemai, Abderrazak [4 ,6 ]
Institutions
[1] Gen Directorate Technol Studies, Higher Inst Technol Studies Gabes, STIC, Rades 2098, Tunisia
[2] Gabes Univ, Natl Engn Sch Gabes, Res Team Intelligent Machines, Gabes 6072, Tunisia
[3] Univ Tunis El Manar, FST, Tunis 2092, Tunisia
[4] Carthage Univ, Tunisia Polytech Sch, SERCOM Lab, La Marsa 2078, Tunisia
[5] Paris Pantheon Assas Univ, Efrei Res Lab, Paris, France
[6] Ctr Urbain Nord, INSAT, BP 676, Tunis 1080, Tunisia
Keywords
Machine learning; Privacy; Membership inference attack; Dropout; L2 regularization; Differential privacy; Data augmentation; Convolutional neural network (CNN)
DOI
10.1007/s11042-023-17394-3
CLC number
TP [Automation and computer technology]
Subject classification code
0812
Abstract
Machine learning (ML) has revolutionized various industries, but concerns about privacy and security have emerged as significant challenges. Membership inference attacks (MIAs) pose a serious threat by attempting to determine whether a specific data record was used to train an ML model. In this study, we evaluate three defense strategies against MIAs: data augmentation (DA), dropout with L2 regularization, and differential privacy (DP). Through experiments, we assess the effectiveness of these techniques in mitigating the success of MIAs while maintaining acceptable model accuracy. Our findings demonstrate that DA not only improves model accuracy but also enhances privacy protection. The dropout and L2 regularization approach effectively reduces the impact of MIAs without compromising accuracy. Adopting DP, however, introduces a trade-off: it limits the influence of MIAs but degrades model accuracy. Our DA defense strategy, for instance, shows promising results, with privacy improvements of 12.97%, 15.82%, and 10.28% on the MNIST, CIFAR-10, and CIFAR-100 datasets, respectively. These insights contribute to the growing field of privacy protection in ML and highlight the importance of safeguarding sensitive data. Further research is needed to advance privacy-preserving techniques and address the evolving landscape of ML security.
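The three defense mechanisms named in the abstract can be illustrated compactly. The sketch below is not taken from the paper; it is a generic, pure-Python illustration of the underlying techniques the authors evaluate: inverted dropout, an L2 penalty term added to the training loss, and DP-SGD-style gradient clipping with Gaussian noise. All function names and parameter values are hypothetical.

```python
import math
import random

def dropout(activations, rate=0.5, training=True, seed=None):
    """Inverted dropout: during training, zero each unit with probability
    `rate` and scale survivors by 1/(1 - rate) so the expected activation
    is unchanged; at inference time the input passes through untouched."""
    if not training or rate == 0.0:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

def l2_penalty(weights, lam=1e-4):
    """L2 regularization term added to the training loss: lam * sum(w^2)."""
    return lam * sum(w * w for w in weights)

def dp_sanitize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, seed=None):
    """DP-SGD-style sanitization: rescale the gradient so its L2 norm is at
    most `clip_norm`, then add Gaussian noise with standard deviation
    noise_multiplier * clip_norm to each component."""
    rng = random.Random(seed)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    sigma = noise_multiplier * clip_norm
    return [g * scale + rng.gauss(0.0, sigma) for g in grad]

acts = [0.8, -0.3, 1.2, 0.5]
print(dropout(acts, training=False))           # inference: passes through unchanged
print(l2_penalty([1.0, -2.0, 3.0], lam=0.01))  # lam * (1 + 4 + 9)
print(dp_sanitize_gradient([3.0, 4.0]))        # clipped to norm 1, then noised
```

The accuracy/privacy trade-off the abstract reports for DP shows up directly in `noise_multiplier`: larger noise gives stronger privacy but perturbs gradients more, which is why DP degrades model accuracy while dropout with L2 does not.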
Pages: 44455-44484 (30 pages)
Related papers
(50 records in total)
  • [1] Assessment of data augmentation, dropout with L2 Regularization and differential privacy against membership inference attacks
    Sana Ben Hamida
    Hichem Mrabet
    Faten Chaieb
    Abderrazak Jemai
    Multimedia Tools and Applications, 2024, 83 : 44455 - 44484
  • [2] BAN-MPR: Defending against Membership Inference Attacks with Born Again Networks and Membership Privacy Regularization
    Liu, Yiqing
    Yu, Juan
    Han, Jianmin
    2022 INTERNATIONAL CONFERENCE ON COMPUTERS AND ARTIFICIAL INTELLIGENCE TECHNOLOGIES, CAIT, 2022, : 9 - 15
  • [3] Output regeneration defense against membership inference attacks for protecting data privacy
    Ding, Yong
    Huang, Peixiong
    Liang, Hai
    Yuan, Fang
    Wang, Huiyong
    INTERNATIONAL JOURNAL OF WEB INFORMATION SYSTEMS, 2023, : 61 - 79
  • [4] When Does Data Augmentation Help With Membership Inference Attacks?
    Kaya, Yigitcan
    Dumitras, Tudor
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [5] EAR: An Enhanced Adversarial Regularization Approach against Membership Inference Attacks
    Hu, Hongsheng
    Salcic, Zoran
    Dobbie, Gillian
    Chen, Yi
    Zhang, Xuyun
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [6] Comparing Local and Central Differential Privacy Using Membership Inference Attacks
    Bernau, Daniel
    Robl, Jonas
    Grassal, Philip W.
    Schneider, Steffen
    Kerschbaum, Florian
    DATA AND APPLICATIONS SECURITY AND PRIVACY XXXV, 2021, 12840 : 22 - 42
  • [7] Synthetic data for enhanced privacy: A VAE-GAN approach against membership inference attacks
    Yan, Jian'en
    Huang, Haihui
    Yang, Kairan
    Xu, Haiyan
    Li, Yanling
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [8] Membership inference attacks against synthetic health data
    Zhang, Ziqi
    Yan, Chao
    Malin, Bradley A.
    JOURNAL OF BIOMEDICAL INFORMATICS, 2022, 125
  • [9] Differential Privacy Protection Against Membership Inference Attack on Machine Learning for Genomic Data
    Chen, Junjie
    Wang, Wendy Hui
    Shi, Xinghua
    PACIFIC SYMPOSIUM ON BIOCOMPUTING 2021, 2021, : 26 - 37
  • [10] Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability
    Truex, Stacey
    Liu, Ling
    Gursoy, Mehmet Emre
    Wei, Wenqi
    Yu, Lei
    2019 FIRST IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2019), 2019, : 82 - 91