Membership Inference Attacks and Defenses in Neural Network Pruning

Cited: 0
Authors: Yuan, Xiaoyong [1]; Zhang, Lan [1]
Affiliations: [1] Michigan Technological University, Houghton, MI 49931 USA
Funding: U.S. National Science Foundation
DOI: not available
CLC Classification: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Neural network pruning has become an essential technique for reducing the computation and memory requirements of deploying deep neural networks on resource-constrained devices. Most existing research focuses primarily on balancing the sparsity and accuracy of a pruned neural network by strategically removing insignificant parameters and retraining the pruned model. Because retraining reuses the training samples, it can increase memorization and thus pose serious privacy risks, which have not yet been investigated. In this paper, we conduct the first analysis of privacy risks in neural network pruning. Specifically, we investigate the impact of neural network pruning on training data privacy, i.e., membership inference attacks. We first explore the impact of neural network pruning on prediction divergence, where the pruning process disproportionately affects the pruned model's behavior on members versus non-members; moreover, the magnitude of this divergence varies across classes in a fine-grained manner. Motivated by this divergence, we propose a self-attention membership inference attack against pruned neural networks. Extensive experiments rigorously evaluate the privacy impacts of different pruning approaches, sparsity levels, and adversary knowledge. The proposed attack achieves higher attack performance on pruned models than eight existing membership inference attacks. In addition, we propose a new defense mechanism that protects the pruning process by mitigating the prediction divergence based on KL divergence; experiments demonstrate that it effectively mitigates the privacy risks while maintaining the sparsity and accuracy of the pruned models.
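The two quantities the abstract revolves around can be made concrete with a small sketch: the KL-divergence "prediction divergence" between an original and a pruned model's predictive distributions (the signal the defense penalizes), and a simple confidence-threshold membership inference decision (a generic baseline attack, not the paper's self-attention attack). All names, toy logits, and the threshold below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q): the "prediction divergence" a KL-based defense
    # would drive toward zero during pruning/retraining.
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def confidence_attack(probs, threshold=0.9):
    # Generic confidence-threshold membership inference:
    # predict "member" when the model is unusually confident.
    # The 0.9 threshold is an arbitrary illustrative choice.
    return probs.max(axis=-1) >= threshold

# Hypothetical logits for one input, before and after pruning.
orig_logits = np.array([2.0, 0.5, -1.0])
pruned_logits = np.array([3.5, -0.5, -2.0])  # pruning sharpened the prediction

p, q = softmax(orig_logits), softmax(pruned_logits)
div = float(kl_divergence(q, p))      # divergence introduced by pruning
is_member = bool(confidence_attack(q[None, :])[0])
print(div, is_member)
```

In this toy case the pruned model's sharper output both raises the KL divergence from the original model and pushes the sample over the attacker's confidence threshold, which is exactly the coupling the defense exploits: regularizing the divergence keeps the pruned model's confidence profile close to the original's.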
Pages: 4561-4578 (18 pages)
Related Papers (50 total)
  • [21] Do Backdoors Assist Membership Inference Attacks?
    Goto, Yumeki
    Ashizawa, Nami
    Shibahara, Toshiki
    Yanai, Naoto
    SECURITY AND PRIVACY IN COMMUNICATION NETWORKS, PT II, SECURECOMM 2023, 2025, 568 : 251 - 265
  • [22] Membership Inference Attacks on Machine Learning: A Survey
    Hu, Hongsheng
    Salcic, Zoran
    Sun, Lichao
    Dobbie, Gillian
    Yu, Philip S.
    Zhang, Xuyun
    ACM COMPUTING SURVEYS, 2022, 54 (11S)
  • [23] Membership Inference Attacks Against the Graph Classification
    Yang, Junze
    Li, Hongwei
    Fan, Wenshu
    Zhang, Xilin
    Hao, Meng
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 6729 - 6734
  • [24] Membership Inference Attacks are Easier on Difficult Problems
    Shafran, Avital
    Peleg, Shmuel
    Hoshen, Yedid
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 14800 - 14809
  • [25] Detection of Membership Inference Attacks on GAN Models
    Ekramifard, Ala
    Amintoosi, Haleh
    Seno, Seyed Amin Hosseini
    ISECURE-ISC INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2025, 17 (01): : 43 - 57
  • [26] Label-Only Membership Inference Attacks
    Choquette-Choo, Christopher A.
    Tramer, Florian
    Carlini, Nicholas
    Papernot, Nicolas
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [27] Membership Inference Attacks and Generalization: A Causal Perspective
    Baluta, Teodora
    Shen, Shiqi
    Hitarth, S.
    Tople, Shruti
    Saxena, Prateek
    PROCEEDINGS OF THE 2022 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2022, 2022, : 249 - 262
  • [28] Membership Inference Attacks against Diffusion Models
    Matsumoto, Tomoya
    Miura, Takayuki
    Yanai, Naoto
    2023 IEEE SECURITY AND PRIVACY WORKSHOPS, SPW, 2023, : 77 - 83
  • [29] Membership Inference Attacks From First Principles
    Carlini, Nicholas
    Chien, Steve
    Nasr, Milad
    Song, Shuang
    Terzis, Andreas
    Tramer, Florian
    43RD IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP 2022), 2022, : 1897 - 1914
  • [30] Enhance membership inference attacks in federated learning
    He, Xinlong
    Xu, Yang
    Zhang, Sicong
    Xu, Weida
    Yan, Jiale
    COMPUTERS & SECURITY, 2024, 136