Membership Inference Attacks and Defenses in Neural Network Pruning

Cited by: 0
Authors
Yuan, Xiaoyong [1 ]
Zhang, Lan [1 ]
Affiliations
[1] Michigan Technol Univ, Houghton, MI 49931 USA
Funding
U.S. National Science Foundation;
Keywords
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Neural network pruning has been an essential technique for reducing the computation and memory requirements of deep neural networks deployed on resource-constrained devices. Most existing research focuses primarily on balancing the sparsity and accuracy of a pruned neural network by strategically removing insignificant parameters and retraining the pruned model. Because retraining reuses the original training samples, pruning can increase memorization and thereby pose serious privacy risks, which have not yet been investigated. In this paper, we conduct the first analysis of privacy risks in neural network pruning. Specifically, we investigate the impact of neural network pruning on training data privacy, i.e., membership inference attacks. We first examine how pruning affects prediction divergence: the pruning process disproportionately changes the pruned model's behavior on members versus non-members, and the magnitude of this divergence further varies across classes in a fine-grained manner. Motivated by this divergence, we propose a self-attention membership inference attack against pruned neural networks. Extensive experiments are conducted to rigorously evaluate the privacy impact of different pruning approaches, sparsity levels, and levels of adversary knowledge. The proposed attack achieves higher attack performance on pruned models than eight existing membership inference attacks. In addition, we propose a new defense mechanism that protects the pruning process by mitigating the prediction divergence with a KL-divergence distance; experiments demonstrate that it effectively mitigates the privacy risks while maintaining the sparsity and accuracy of the pruned models.
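The abstract describes pruning as removing insignificant parameters and then retraining on the original training samples, which amplifies the member vs. non-member prediction divergence that the attack exploits. Below is a minimal illustrative sketch, assuming PyTorch, a toy model, and a generic confidence-gap measurement; it is not the paper's self-attention attack, and all names and hyperparameters here are hypothetical.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# Toy model standing in for a network to be pruned.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Step 1: magnitude pruning -- zero out the 80% smallest-magnitude weights in
# each Linear layer (one common pruning strategy; not necessarily the exact
# schemes evaluated in the paper).
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# Step 2: retrain (fine-tune) the pruned model on the original training
# samples; this reuse of training data is the source of the extra
# memorization discussed in the abstract.
x_members = torch.randn(256, 32)              # placeholder "member" data
y_members = torch.randint(0, 10, (256,))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(50):
    optimizer.zero_grad()
    nn.functional.cross_entropy(model(x_members), y_members).backward()
    optimizer.step()

# Step 3: a crude membership signal -- compare average prediction confidence
# on members vs. non-members; a large gap reflects prediction divergence.
x_nonmembers = torch.randn(256, 32)
with torch.no_grad():
    conf_m = torch.softmax(model(x_members), dim=1).max(dim=1).values.mean()
    conf_n = torch.softmax(model(x_nonmembers), dim=1).max(dim=1).values.mean()
print(f"mean confidence  members={conf_m:.3f}  non-members={conf_n:.3f}")

The abstract also mentions a defense that mitigates prediction divergence during pruning using a KL-divergence distance. The sketch below shows one plausible instantiation, assuming the regularizer aligns the pruned model's predictive distribution with that of the original dense model during retraining; the paper's exact formulation may differ.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

dense_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
pruned_model = copy.deepcopy(dense_model)     # stand-in for an already-pruned copy

lam = 1.0                                     # regularization weight (assumed)
optimizer = torch.optim.SGD(pruned_model.parameters(), lr=0.01)
x, y = torch.randn(256, 32), torch.randint(0, 10, (256,))

for _ in range(10):
    optimizer.zero_grad()
    logits = pruned_model(x)
    with torch.no_grad():
        reference = F.softmax(dense_model(x), dim=1)   # dense model's predictions
    task_loss = F.cross_entropy(logits, y)
    # KL(reference || pruned); F.kl_div takes log-probabilities first and
    # probabilities second. The penalty discourages the retrained pruned model
    # from drifting toward over-confident (memorized) predictions on members.
    kl_loss = F.kl_div(F.log_softmax(logits, dim=1), reference,
                       reduction="batchmean")
    (task_loss + lam * kl_loss).backward()
    optimizer.step()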
Pages: 4561-4578
Number of pages: 18
Related Papers (50 in total)
  • [1] Defenses to Membership Inference Attacks: A Survey. Hu, Li; Yan, Anli; Yan, Hongyang; Li, Jin; Huang, Teng; Zhang, Yingying; Dong, Changyu; Yang, Chunsheng. ACM COMPUTING SURVEYS, 2024, 56 (04)
  • [2] Membership Inference Attacks and Defenses in Classification Models. Li, Jiacheng; Li, Ninghui; Ribeiro, Bruno. PROCEEDINGS OF THE ELEVENTH ACM CONFERENCE ON DATA AND APPLICATION SECURITY AND PRIVACY (CODASPY '21), 2021: 5-16
  • [3] Membership Inference Attacks and Defenses in Federated Learning: A Survey. Bai, Li; Hu, Haibo; Ye, Qingqing; Li, Haoyang; Wang, Leixia; Xu, Jianliang. ACM COMPUTING SURVEYS, 2025, 57 (04)
  • [4] Membership Inference Attacks Against Robust Graph Neural Network. Liu, Zhengyang; Zhang, Xiaoyu; Chen, Chenyang; Lin, Shen; Li, Jingjin. CYBERSPACE SAFETY AND SECURITY, CSS 2022, 2022, 13547: 259-273
  • [5] Attribute-Based Membership Inference Attacks and Defenses on GANs. Sun, Hui; Zhu, Tianqing; Li, Jie; Ji, Shoulin; Zhou, Wanlei. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04): 2376-2393
  • [6] A survey on privacy inference attacks and defenses in cloud-based Deep Neural Network. Zhang, Xiaoyu; Chen, Chao; Xie, Yi; Chen, Xiaofeng; Zhang, Jun; Xiang, Yang. COMPUTER STANDARDS & INTERFACES, 2023, 83
  • [7] Label-Only Membership Inference Attacks and Defenses in Semantic Segmentation Models. Zhang, Guangsheng; Liu, Bo; Zhu, Tianqing; Ding, Ming; Zhou, Wanlei. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (02): 1435-1449
  • [8] Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks. Famili, Azadeh; Lao, Yingjie. SENSORS, 2023, 23 (18)
  • [9] Exploration of Membership Inference Attack on Convolutional Neural Networks and Its Defenses. Yao, Yimian. 2022 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, COMPUTER VISION AND MACHINE LEARNING (ICICML), 2022: 604-610
  • [10] On the Difficulty of Membership Inference Attacks. Rezaei, Shahbaz; Liu, Xin. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 7888-7896