Group Pruning with Group Sparse Regularization for Deep Neural Network Compression

Cited: 0
Authors
Wu, Chenglu [1 ]
Pang, Wei [1 ]
Liu, Hao [1 ]
Lu, Shengli [1 ]
Affiliation
[1] Southeast Univ, Natl ASIC Syst Engn Res Ctr, Nanjing, Peoples R China
Keywords
deep learning; neural network pruning; group sparsity; network compression;
DOI
10.1109/siprocess.2019.8868650
Chinese Library Classification (CLC) number
TP31 [Computer software];
Discipline codes
081202; 0835;
Abstract
Network pruning is important for deploying deep neural networks on hardware platforms. However, most pruning methods focus on coarse-grained pruning, which incurs high precision loss, while many fine-grained pruning methods target the fully connected layers rather than the convolutional layers. A group pruning technique is proposed that focuses only on convolutional layers and keeps the weight reduction rate consistent within each weight group. This helps solve inefficiency problems that follow fine-grained pruning, including internal buffer misalignment and load imbalance. During pre-training, group sparse regularization (GSR) combined with standardization of the weight distribution is applied to alleviate the loss of precision under high sparsity. Finally, LeNet and VGG-16 were tested on the MNIST and CIFAR-10 data sets; the convolutional-layer weight reduction rates are 87.5% and 62.5%, respectively, with an accuracy loss within 0.14%.
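The abstract's two core ideas can be sketched in a few lines: a group-lasso (L2,1) penalty that pushes whole weight groups toward zero during pre-training, and magnitude pruning that removes the same fraction of weights inside every group so the reduction rate stays uniform. This is an illustrative NumPy sketch, not the paper's implementation; the function names, the `group_size` and `keep_ratio` parameters, and the assumption that the weight count divides evenly into groups are all ours.

```python
import numpy as np

def group_sparse_penalty(weights, group_size):
    """L2,1 (group lasso) regularizer: sum of per-group L2 norms.
    Adding this term to the training loss drives entire groups of
    weights toward zero, the core idea behind GSR."""
    w = np.asarray(weights).reshape(-1, group_size)   # partition into groups
    return np.sqrt((w ** 2).sum(axis=1)).sum()        # sum of group L2 norms

def group_prune(weights, group_size, keep_ratio):
    """Zero out the same fraction of weights inside every group, so the
    weight reduction rate is identical across groups (this avoids the
    load-imbalance / buffer-misalignment issues of unstructured pruning)."""
    w = np.asarray(weights).reshape(-1, group_size).copy()
    keep = int(round(group_size * keep_ratio))        # weights kept per group
    for g in w:
        # zero the (group_size - keep) smallest-magnitude weights in the group
        drop = np.argsort(np.abs(g))[: group_size - keep]
        g[drop] = 0.0
    return w.reshape(np.asarray(weights).shape)
```

With `group_size=8` and `keep_ratio=0.125`, exactly 7 of every 8 weights are zeroed, i.e. an 87.5% reduction rate matching the LeNet figure reported in the abstract.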
Pages: 325 - 329
Page count: 5
Related papers
50 records in total
  • [1] Filter Pruning using Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks
    Mitsuno, Kakeru
    Kurita, Takio
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 1089 - 1095
  • [2] Group sparse regularization for deep neural networks
    Scardapane, Simone
    Comminiello, Danilo
    Hussain, Amir
    Uncini, Aurelio
    NEUROCOMPUTING, 2017, 241 : 81 - 89
  • [3] Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks
    Mitsuno, Kakeru
    Miyao, Junichi
    Kurita, Takio
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [4] Automated Pruning for Deep Neural Network Compression
    Manessi, Franco
    Rozza, Alessandro
    Bianco, Simone
    Napoletano, Paolo
    Schettini, Raimondo
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 657 - 664
  • [5] Group Fisher Pruning for Practical Network Compression
    Liu, Liyang
    Zhang, Shilong
    Kuang, Zhanghui
    Zhou, Aojun
    Xue, Jing-Hao
    Wang, Xinjiang
    Chen, Yimin
    Yang, Wenming
    Liao, Qingmin
    Zhang, Wayne
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [6] Pruning-aware Sparse Regularization for Network Pruning
    Jiang, Nan-Fei
    Zhao, Xu
    Zhao, Chao-Yang
    An, Yong-Qi
    Tang, Ming
    Wang, Jin-Qiao
    MACHINE INTELLIGENCE RESEARCH, 2023, 20 (01) : 109 - 120
  • [7] Causal Network Inference Via Group Sparse Regularization
    Bolstad, Andrew
    Van Veen, Barry D.
    Nowak, Robert
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2011, 59 (06) : 2628 - 2641
  • [8] Group variable selection via group sparse neural network
    Zhang, Xin
    Zhao, Junlong
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2024, 192