Learning compact ConvNets through filter pruning based on the saliency of a feature map

Cited by: 2
|
Authors
Liu, Zhoufeng [1 ]
Liu, Xiaohui [1 ]
Li, Chunlei [1 ]
Ding, Shumin [2 ]
Liao, Liang [1 ]
Affiliations
[1] Zhongyuan Univ Technol, Sch Elect & Informat Engn, Zhengzhou, Peoples R China
[2] Zhongyuan Univ Technol, Sch Energy & Environm, Zhengzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
53;
DOI
10.1049/ipr2.12338
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As the performance of convolutional neural networks (CNNs) has increased, their storage and power demands have grown correspondingly. Among the methods reported in the literature, filter pruning is a crucial technique for constructing lightweight networks. However, current filter pruning methods still suffer from complicated procedures and training inefficiency. This paper proposes an effective filter pruning method that uses the saliency of the feature map (SFM), i.e. its information entropy, as a theoretical guide to whether a filter is essential. The pruning principle used here is that a filter whose feature map shows weak saliency at an early stage will not contribute significantly to the final accuracy. Thus, non-salient feature maps with low information entropy, and their corresponding filters, can be pruned efficiently. In addition, an over-parameterized convolution method is employed to improve the pruned model's accuracy without increasing the parameter count at inference time. Experimental results show that, without introducing any additional constraints, the method advances the state-of-the-art in FLOPs and parameter reduction at similar accuracy. For example, on CIFAR-10, the pruned VGG-16 loses only 0.39% Top-1 accuracy with 83.3% of parameters and 66.7% of FLOPs removed. On ImageNet-100, the pruned ResNet-50 suffers only a 0.76% Top-1 accuracy degradation with 61.19% of parameters and 62.98% of FLOPs removed.
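The core idea in the abstract — score each filter by the information entropy of its output feature map, then prune the lowest-entropy filters — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the histogram-based entropy estimate, function names, bin count, and pruning ratio are all illustrative assumptions.

```python
import math

def feature_map_entropy(fmap, n_bins=16):
    """Estimate the information entropy of one feature map from a
    histogram of its activations (binning scheme is an assumption).
    A near-constant map has low entropy, i.e. weak saliency."""
    vals = [v for row in fmap for v in row]
    lo, hi = min(vals), max(vals)
    if hi == lo:                          # constant map carries no information
        return 0.0
    counts = [0] * n_bins
    for v in vals:
        b = min(int((v - lo) / (hi - lo) * n_bins), n_bins - 1)
        counts[b] += 1
    n = len(vals)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def select_filters_to_prune(fmaps, prune_ratio=0.5):
    """Return indices of the filters whose feature maps have the
    lowest entropy; these are the pruning candidates."""
    scored = sorted((feature_map_entropy(m), i) for i, m in enumerate(fmaps))
    k = int(len(fmaps) * prune_ratio)
    return sorted(i for _, i in scored[:k])
```

For example, given one constant feature map and one with varied activations, `select_filters_to_prune([...], 0.5)` flags the constant (zero-entropy) map's filter for removal, matching the paper's principle that non-salient maps and their filters can be dropped.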
Pages: 123 - 133
Page count: 11
Related Papers
50 records
  • [1] Toward Compact ConvNets via Structure-Sparsity Regularized Filter Pruning
    Lin, Shaohui
    Ji, Rongrong
    Li, Yuchao
    Deng, Cheng
    Li, Xuelong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (02) : 574 - 588
  • [2] Filter pruning via feature map clustering
    Li, Wei
    He, Yongxing
    Zhang, Xiaoyu
    Tang, Yongchuan
    INTELLIGENT DATA ANALYSIS, 2023, 27 (04) : 911 - 933
  • [3] Filter Pruning via Measuring Feature Map Information
    Shao, Linsong
    Zuo, Haorui
    Zhang, Jianlin
    Xu, Zhiyong
    Yao, Jinzhen
    Wang, Zhixing
    Li, Hong
    SENSORS, 2021, 21 (19)
  • [4] Filter pruning-based two-step feature map reconstruction
    Liang, Yongsheng
    Liu, Wei
    Yi, Shuangyan
    Yang, Huoxiang
    He, Zhenyu
    SIGNAL IMAGE AND VIDEO PROCESSING, 2021, 15 (07) : 1555 - 1563
  • [5] FUSION OF SALIENCY MAP AND DEEP FEATURE-BASED CORRELATION FILTER FOR ENHANCING TRACKING PERFORMANCES
    Lee, Hyemin
    Kim, Daijin
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 2091 - 2095
  • [6] Filter Pruning Using Expectation Value of Feature Map's Summation
    Wu, Hai
    Liu, Chuanbin
    Lin, Fanchao
    Liu, Yizhi
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2021, PT IV, 2021, 13016 : 748 - 755
  • [7] HRank: Filter Pruning using High-Rank Feature Map
    Lin, Mingbao
    Ji, Rongrong
    Wang, Yan
    Zhang, Yichen
    Zhang, Baochang
    Tian, Yonghong
    Shao, Ling
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 1526 - 1535
  • [8] A Pruning Method Based on Feature Map Similarity Score
    Cui, Jihua
    Wang, Zhenbang
    Yang, Ziheng
    Guan, Xin
    BIG DATA AND COGNITIVE COMPUTING, 2023, 7 (04)
  • [9] AFMPM: adaptive feature map pruning method based on feature distillation
    Guo, Yufeng
    Zhang, Weiwei
    Wang, Junhuang
    Ji, Ming
    Zhen, Chenghui
    Guo, Zhengzheng
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (02) : 573 - 588