Group Pruning with Group Sparse Regularization for Deep Neural Network Compression

Cited by: 0
Authors
Wu, Chenglu [1 ]
Pang, Wei [1 ]
Liu, Hao [1 ]
Lu, Shengli [1 ]
Affiliation
[1] Southeast Univ, Natl ASIC Syst Engn Res Ctr, Nanjing, Peoples R China
Keywords
deep learning; neural network pruning; group sparsity; network compression;
DOI
10.1109/siprocess.2019.8868650
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Classification Code
081202; 0835;
Abstract
Network pruning is important for deploying deep neural networks on hardware platforms. However, most pruning methods focus on coarse-grained pruning and incur high precision loss, while many fine-grained pruning methods target the fully connected layers rather than the convolutional layers. A group pruning technique is proposed that acts only on convolutional layers and keeps the weight reduction rate consistent within each weight group. This helps address inefficiencies that arise after fine-grained pruning, including internal buffer misalignment and load imbalance. During pre-training, a strategy of group sparse regularization (GSR) combined with standardizing the weight distribution is added to alleviate the loss of precision under high sparsity. Finally, we evaluate LeNet on MNIST and VGG-16 on CIFAR-10: the convolutional-layer weight reduction rates are 87.5% and 62.5%, respectively, within 0.14% accuracy loss.
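The abstract couples two mechanisms: a group sparse regularizer added to the training loss, and within-group magnitude pruning of convolutional weights so that every group keeps the same number of nonzeros. The record does not include the authors' implementation, so the following is only a minimal PyTorch sketch of these two ideas; the flattened grouping, the group size of 16, the pruning rate, and the regularization coefficient are illustrative assumptions, not details taken from the paper.

# Minimal sketch (not the authors' code) of group sparse regularization
# and within-group pruning for convolutional layers. Group size, pruning
# rate, and the regularization weight below are illustrative assumptions.
import torch
import torch.nn as nn

def group_sparse_regularizer(model, group_size=16):
    """Sum of L2 norms over fixed-size weight groups of conv layers (group-lasso style)."""
    reg = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight.view(-1)
            pad = (-w.numel()) % group_size          # pad so the length divides evenly
            w = torch.cat([w, w.new_zeros(pad)])
            reg = reg + w.view(-1, group_size).norm(dim=1).sum()
    return reg

def group_prune(model, prune_rate=0.875, group_size=16):
    """Zero the same fraction of smallest-magnitude weights inside every group,
    so each group of a conv layer keeps an identical number of nonzeros."""
    k = int(group_size * prune_rate)                 # weights removed per group
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, nn.Conv2d):
                w = m.weight.view(-1)
                pad = (-w.numel()) % group_size
                padded = torch.cat([w, w.new_zeros(pad)]).view(-1, group_size)
                # indices of the k smallest-magnitude weights within each group
                idx = padded.abs().argsort(dim=1)[:, :k]
                padded.scatter_(1, idx, 0.0)
                m.weight.copy_(padded.view(-1)[: w.numel()].view_as(m.weight))

# Example usage during pre-training (coefficient 1e-4 is illustrative):
#   loss = criterion(output, target) + 1e-4 * group_sparse_regularizer(net)
# After training, call group_prune(net) and fine-tune to recover accuracy.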
Pages: 325-329
Number of pages: 5
Related Papers
50 records in total
  • [31] Pruning by Training: A Novel Deep Neural Network Compression Framework for Image Processing
    Tian, Guanzhong
    Chen, Jun
    Zeng, Xianfang
    Liu, Yong
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 344 - 348
  • [32] A "Network Pruning Network" Approach to Deep Model Compression
    Verma, Vinay Kumar
    Singh, Pravendra
    Namboodiri, Vinay P.
    Rai, Piyush
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 2998 - 3007
  • [33] Deep side group sparse coding network for image denoising
    Yin, Haitao
    Wang, Tianyou
    IET IMAGE PROCESSING, 2023, 17 (01) : 1 - 11
  • [34] Quantisation and Pruning for Neural Network Compression and Regularisation
    Paupamah, Kimessha
    James, Steven
    Klein, Richard
    2020 INTERNATIONAL SAUPEC/ROBMECH/PRASA CONFERENCE, 2020, : 295 - 300
  • [35] ON THE ROLE OF STRUCTURED PRUNING FOR NEURAL NETWORK COMPRESSION
    Bragagnolo, Andrea
    Tartaglione, Enzo
    Fiandrotti, Attilio
    Grangetto, Marco
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3527 - 3531
  • [36] BP Neural Network Feature Selection Based on Group Lasso Regularization
    Liu, Tiqian
    Xiao, Jiang-Wen
    Huang, Zhengyi
    Kong, Erdan
    Liang, Yuntao
    2019 CHINESE AUTOMATION CONGRESS (CAC2019), 2019, : 2786 - 2790
  • [37] Neural Network Compression and Acceleration by Federated Pruning
    Pei, Songwen
    Wu, Yusheng
    Qiu, Meikang
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2020, PT II, 2020, 12453 : 173 - 183
  • [38] A CNN channel pruning low-bit framework using weight quantization with sparse group lasso regularization
    Long, Xin
    Zeng, Xiangrong
    Liu, Yan
    Xiao, Huaxin
    Zhang, Maojun
    Ben, Zongcheng
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2020, 39 (01) : 221 - 232
  • [39] Speech bottleneck feature extraction method based on overlapping group lasso sparse deep neural network
    Luo, Yuan
    Liu, Yu
    Zhang, Yi
    Yue, Congcong
    SPEECH COMMUNICATION, 2018, 99 : 56 - 61
  • [40] Deep Neural Networks Pruning via the Structured Perspective Regularization
    Cacciola, Matteo
    Frangioni, Antonio
    Li, Xinlin
    Lodi, Andrea
    SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE, 2023, 5 (04): : 1051 - 1077