Assessment of data augmentation, dropout with L2 Regularization and differential privacy against membership inference attacks

Cited by: 4
Authors
Ben Hamida, Sana [1 ,2 ,3 ]
Mrabet, Hichem [3 ,4 ]
Chaieb, Faten [5 ]
Jemai, Abderrazak [4 ,6 ]
Affiliations
[1] Gen Directorate Technol Studies, Higher Inst Technol Studies Gabes, STIC, Rades 2098, Tunisia
[2] Gabes Univ, Natl Engn Sch Gabes, Res Team Intelligent Machines, Gabes 6072, Tunisia
[3] Univ Tunis El Manar, FST, Tunis 2092, Tunisia
[4] Carthage Univ, Tunisia Polytech Sch, SERCOM Lab, La Marsa 2078, Tunisia
[5] Paris Pantheon Assas Univ, Efrei Res Lab, Paris, France
[6] Ctr Urbain Nord, INSAT, BP 676, Tunis 1080, Tunisia
Keywords
Machine learning; Privacy; Membership inference attack; Dropout; L2 regularization; Differential privacy; Data augmentation; Convolutional neural network (CNN)
DOI
10.1007/s11042-023-17394-3
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Machine learning (ML) has revolutionized various industries, but privacy and security concerns have emerged as significant challenges. Membership inference attacks (MIAs) pose a serious threat by attempting to determine whether a specific data record was used to train an ML model. In this study, we evaluate three defense strategies against MIAs: data augmentation (DA), dropout with L2 regularization, and differential privacy (DP). Through experiments, we assess how effectively these techniques mitigate the success of MIAs while maintaining acceptable model accuracy. Our findings demonstrate that DA not only improves model accuracy but also enhances privacy protection. The dropout and L2 regularization approach effectively reduces the impact of MIAs without compromising accuracy. Adopting DP, however, introduces a trade-off: it limits MIA effectiveness but degrades model accuracy. Our DA defense strategy, for instance, shows promising results, with privacy improvements of 12.97%, 15.82%, and 10.28% on the MNIST, CIFAR-10, and CIFAR-100 datasets, respectively. These insights contribute to the growing field of privacy protection in ML and highlight the importance of safeguarding sensitive data. Further research is needed to advance privacy-preserving techniques and address the evolving landscape of ML security.
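The abstract describes the defenses only at a high level. As a concrete illustration, the sketch below shows how two of them, data augmentation and dropout combined with L2 regularization, are typically wired into a small CNN for CIFAR-10-sized images in PyTorch. This is a minimal sketch under assumed hyperparameters (dropout rate, weight-decay coefficient, augmentation choices, network depth), not the authors' exact architecture or settings; note that in PyTorch the optimizer's weight_decay argument is what implements the L2 penalty.

```python
# Minimal sketch (assumed hyperparameters, not the paper's exact setup):
# data augmentation + dropout + L2 regularization (via weight_decay)
# for a small CNN on CIFAR-10-sized inputs (3x32x32, 10 classes).
import torch
import torch.nn as nn
from torchvision import transforms

# Data augmentation: random crops and horizontal flips applied to training
# images only; a test pipeline would keep just ToTensor()/Normalize().
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10, dropout_rate: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # -> 32 x 16 x 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # -> 64 x 8 x 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout_rate),             # dropout defense
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Dropout(dropout_rate),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SmallCNN()
# weight_decay adds the L2 penalty on the weights (the L2 regularization defense).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                            weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on synthetic data (stands in for an
# augmented CIFAR-10 batch produced with train_transform).
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```

A membership-inference evaluation would then compare attack success on models trained with and without these defenses. The third defense in the study, differential privacy, would instead replace this optimizer with a differentially private training procedure (DP-SGD style per-sample gradient clipping plus noise addition), which is the source of the accuracy trade-off noted in the abstract.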
Pages: 44455-44484
Number of pages: 30
Related papers
50 records in total
  • [41] The Differential Diagnostic Affordances of Interventionist and Interactionist Dynamic Assessment for L2 Argumentative Writing
    Nassaji, Hossein
    Kushki, Ali
    Rahimi, Mohammad
    LANGUAGE AND SOCIOCULTURAL THEORY, 2020, 7 (02) : 151 - 175
  • [42] Detection of Lexical Stress Errors in Non-Native (L2) English with Data Augmentation and Attention
    Korzekwa, Daniel
    Barra-Chicote, Roberto
    Zaporowski, Szymon
    Beringer, Grzegorz
    Lorenzo-Trueba, Jaime
    Serafinowicz, Alicja
    Droppo, Jasha
    Drugman, Thomas
    Kostek, Bozena
    INTERSPEECH 2021, 2021, : 3915 - 3919
  • [43] EDGE-ADAPTIVE l2 REGULARIZATION IMAGE RECONSTRUCTION FROM NON-UNIFORM FOURIER DATA
    Churchill, Victor
    Archibald, Rick
    Gelb, Anne
    INVERSE PROBLEMS AND IMAGING, 2019, 13 (05) : 931 - 958
  • [44] ONLINE LOW-RANK SUBSPACE LEARNING FROM INCOMPLETE DATA USING RANK REVEALING l2/l1 REGULARIZATION
    Giampouras, Paris V.
    Rontogiannis, Athanasios A.
    Koutroumbas, Konstantinos D.
    2016 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP (SSP), 2016,
  • [45] Intervention in teachers' differential scoring judgments in assessing L2 writing through communities of assessment practice
    Seker, Meral
    STUDIES IN EDUCATIONAL EVALUATION, 2018, 59 : 209 - 217
  • [46] A primal-dual interior-point framework for using the L1 or L2 norm on the data and regularization terms of inverse problems
    Borsic, A.
    Adler, A.
    INVERSE PROBLEMS, 2012, 28 (09)
  • [47] Finite-time l2 - l∞ filtering for persistent dwell-time switched piecewise-affine systems against deception attacks
    Mei, Zhen
    Fang, Ting
    Shen, Hao
    APPLIED MATHEMATICS AND COMPUTATION, 2022, 427
  • [48] Prediction using step-wise L1, L2 regularization and feature selection for small data sets with large number of features
    Ozgur Demir-Kavuk
    Mayumi Kamada
    Tatsuya Akutsu
    Ernst-Walter Knapp
    BMC Bioinformatics, 12
  • [49] Prediction using step-wise L1, L2 regularization and feature selection for small data sets with large number of features
    Demir-Kavuk, Ozgur
    Kamada, Mayumi
    Akutsu, Tatsuya
    Knapp, Ernst-Walter
    BMC BIOINFORMATICS, 2011, 12
  • [50] Data Rate Assessment on L2–L3 CPU Bus and Bus between CPU and RAM in Modern CPUs
    Komar M.S.
    Automatic Control and Computer Sciences, 2017, 51 (7) : 701 - 708