Group Pruning with Group Sparse Regularization for Deep Neural Network Compression

Cited by: 0
Authors
Wu, Chenglu [1]
Pang, Wei [1]
Liu, Hao [1]
Lu, Shengli [1]
Affiliations
[1] Southeast Univ, Natl ASIC Syst Engn Res Ctr, Nanjing, Peoples R China
Keywords
deep learning; neural network pruning; group sparsity; network compression;
DOI
10.1109/siprocess.2019.8868650
CLC classification
TP31 [Computer Software]
Subject classification codes
081202; 0835
Abstract
Network pruning is important for deploying deep neural networks on hardware platforms. However, most pruning methods focus on coarse-grained pruning, which incurs a high accuracy loss, and many fine-grained pruning methods target the fully connected layers rather than the convolutional layers. We propose a group pruning technique that operates only on convolutional layers and keeps the weight reduction rate consistent within each weight group. This alleviates inefficiencies such as internal buffer misalignment and load imbalance that arise after fine-grained pruning. During pre-training, group sparse regularization (GSR) together with standardization of the weight distribution is applied to reduce the accuracy loss under high sparsity. We evaluate LeNet on MNIST and VGG-16 on CIFAR-10: the convolutional-layer weight reduction rates are 87.5% and 62.5%, respectively, within 0.14% accuracy loss.
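As a concrete illustration of the approach described in the abstract, below is a minimal PyTorch sketch of per-group magnitude pruning combined with a group-lasso (group sparse) penalty. This is not the authors' implementation: the group size, keep ratio, and the helper names group_sparse_penalty and group_prune_mask are illustrative assumptions, and the paper's actual grouping scheme over the convolutional weight tensor may differ.

```python
# Sketch (not the authors' code): group sparse regularization plus per-group
# magnitude pruning with an identical reduction rate in every weight group.
import torch

def group_sparse_penalty(weight, group_size=16):
    """Group-lasso term: sum of L2 norms over fixed-size weight groups
    (group_size is an assumed, illustrative value)."""
    w = weight.reshape(-1)                      # flatten the conv kernel
    pad = (-w.numel()) % group_size             # zero-pad so length divides evenly
    w = torch.cat([w, w.new_zeros(pad)])
    groups = w.reshape(-1, group_size)
    return groups.norm(dim=1).sum()

def group_prune_mask(weight, group_size=16, keep_ratio=0.125):
    """Keep the same fraction of largest-magnitude weights in every group,
    so each group has the same reduction rate (e.g. 87.5% pruned)."""
    w = weight.reshape(-1)
    pad = (-w.numel()) % group_size
    padded = torch.cat([w, w.new_zeros(pad)])
    groups = padded.abs().reshape(-1, group_size)
    k = max(1, int(round(keep_ratio * group_size)))
    thresh = groups.topk(k, dim=1).values[:, -1:]   # k-th largest per group
    mask = (groups >= thresh).float().reshape(-1)[: w.numel()]
    return mask.reshape(weight.shape)

# Usage: during pre-training add lambda * group_sparse_penalty(conv.weight) to
# the loss, then zero pruned weights with conv.weight * group_prune_mask(conv.weight).
```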
Pages: 325-329
Number of pages: 5