Assessment of data augmentation, dropout with L2 Regularization and differential privacy against membership inference attacks

Cited by: 4
Authors
Ben Hamida, Sana [1 ,2 ,3 ]
Mrabet, Hichem [3 ,4 ]
Chaieb, Faten [5 ]
Jemai, Abderrazak [4 ,6 ]
Affiliations
[1] Gen Directorate Technol Studies, Higher Inst Technol Studies Gabes, STIC, Rades 2098, Tunisia
[2] Gabes Univ, Natl Engn Sch Gabes, Res Team Intelligent Machines, Gabes 6072, Tunisia
[3] Univ Tunis El Manar, FST, Tunis 2092, Tunisia
[4] Carthage Univ, Tunisia Polytech Sch, SERCOM Lab, La Marsa 2078, Tunisia
[5] Paris Pantheon Assas Univ, Efrei Res Lab, Paris, France
[6] Ctr Urbain Nord, INSAT, BP 676, Tunis 1080, Tunisia
Keywords
Machine learning; Privacy; Membership inference attack; Dropout; L2 regularization; Differential privacy; Data augmentation; Convolutional neural network (CNN)
DOI
10.1007/s11042-023-17394-3
Chinese Library Classification (CLC) code
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
Machine learning (ML) has revolutionized various industries, but concerns about privacy and security have emerged as significant challenges. Membership inference attacks (MIAs) pose a serious threat by attempting to determine whether a specific data record was used to train an ML model. In this study, we evaluate three defense strategies against MIAs: data augmentation (DA), dropout with L2 regularization, and differential privacy (DP). Through experiments, we assess the effectiveness of these techniques in mitigating the success of MIAs while maintaining acceptable model accuracy. Our findings demonstrate that DA not only improves model accuracy but also enhances privacy protection. The dropout and L2 regularization approach effectively reduces the impact of MIAs without compromising accuracy. However, adopting DP introduces a trade-off: it limits the influence of MIAs but degrades model accuracy. Our DA defense strategy, for instance, shows promising results, with privacy improvements of 12.97%, 15.82%, and 10.28% on the MNIST, CIFAR-10, and CIFAR-100 datasets, respectively. These insights contribute to the growing field of privacy protection in ML and highlight the importance of safeguarding sensitive data. Further research is needed to advance privacy-preserving techniques and address the evolving landscape of ML security.
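The dropout-with-L2 defense evaluated in the paper combines two standard regularizers that reduce overfitting, which is a key enabler of MIAs. Below is a minimal NumPy sketch of both mechanisms; the rates, shapes, and function names are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate=0.5, training=True):
    """Inverted dropout: zero activations with probability `rate`,
    rescale survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def l2_penalty(weights, lam=1e-4):
    """L2 regularization term added to the training loss: lam * sum(w^2)."""
    return lam * float(np.sum(weights ** 2))

# Toy forward pass with both regularizers applied during training.
W = rng.standard_normal((4, 3))
x = rng.standard_normal((2, 4))
h = dropout(x @ W, rate=0.5, training=True)   # regularized activations
loss_reg = l2_penalty(W, lam=1e-4)            # added to the task loss

# At inference time dropout is disabled, so outputs are deterministic.
assert np.allclose(dropout(x @ W, training=False), x @ W)
```

By shrinking weights (L2) and randomizing activations (dropout), the train-time and test-time behavior of the model converge, which narrows the confidence gap between member and non-member records that shadow-model MIAs exploit.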
Pages: 44455-44484
Page count: 30