Assessment of data augmentation, dropout with L2 regularization and differential privacy against membership inference attacks

Cited by: 4
Authors
Ben Hamida, Sana [1 ,2 ,3 ]
Mrabet, Hichem [3 ,4 ]
Chaieb, Faten [5 ]
Jemai, Abderrazak [4 ,6 ]
Affiliations
[1] Gen Directorate Technol Studies, Higher Inst Technol Studies Gabes, STIC, Rades 2098, Tunisia
[2] Gabes Univ, Natl Engn Sch Gabes, Res Team Intelligent Machines, Gabes 6072, Tunisia
[3] Univ Tunis El Manar, FST, Tunis 2092, Tunisia
[4] Carthage Univ, Tunisia Polytech Sch, SERCOM Lab, La Marsa 2078, Tunisia
[5] Paris Pantheon Assas Univ, Efrei Res Lab, Paris, France
[6] Ctr Urbain Nord, INSAT, BP 676, Tunis 1080, Tunisia
Keywords
Machine learning; Privacy; Membership inference attack; Dropout; L2 regularization; Differential privacy; Data augmentation; Convolutional neural network (CNN)
DOI
10.1007/s11042-023-17394-3
CLC number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Machine learning (ML) has revolutionized various industries, but concerns about privacy and security have emerged as significant challenges. Membership inference attacks (MIAs) pose a serious threat by attempting to determine whether a specific data record was used to train an ML model. In this study, we evaluate three defense strategies against MIAs: data augmentation (DA), dropout with L2 regularization, and differential privacy (DP). Through experiments, we assess the effectiveness of these techniques in mitigating the success of MIAs while maintaining acceptable model accuracy. Our findings demonstrate that DA not only improves model accuracy but also enhances privacy protection. The dropout and L2 regularization approach effectively reduces the impact of MIAs without compromising accuracy. Adopting DP, however, introduces a trade-off: it limits the influence of MIAs but degrades model accuracy. Our DA defense strategy, for instance, shows promising results, with privacy improvements of 12.97%, 15.82%, and 10.28% for the MNIST, CIFAR-10, and CIFAR-100 datasets, respectively. These insights contribute to the growing field of privacy protection in ML and highlight the significance of safeguarding sensitive data. Further research is needed to advance privacy-preserving techniques and address the evolving landscape of ML security.
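For illustration only, the sketch below shows how two of the evaluated defenses, dropout combined with L2 regularization (applied here as optimizer weight decay) and data augmentation, might be wired into a small CNN in PyTorch. The architecture, hyperparameters, dataset assumption (CIFAR-10-sized inputs), and library choice are assumptions for demonstration and are not taken from the paper.

```python
# Minimal sketch of MIA defenses discussed in the abstract (not the authors' code):
# data augmentation + dropout + L2 regularization on a small CNN.
import torch
import torch.nn as nn
from torchvision import transforms

# Data augmentation: random crops and horizontal flips, as commonly used on CIFAR-10.
augment = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

class SmallCNN(nn.Module):
    """Toy CNN with dropout layers acting as one of the evaluated defenses."""
    def __init__(self, num_classes=10, drop_rate=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(drop_rate),                 # dropout defense
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Dropout(drop_rate),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCNN()
# weight_decay adds an L2 penalty on the weights, i.e. L2 regularization.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```

The third defense, differential privacy, would typically be added by swapping the optimizer step for a DP-SGD training loop (e.g., via a library such as Opacus), trading some accuracy for a formal privacy budget, which matches the trade-off reported in the abstract.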
Pages: 44455-44484
Number of pages: 30