Learning compact ConvNets through filter pruning based on the saliency of a feature map

Cited by: 2
|
Authors
Liu, Zhoufeng [1 ]
Liu, Xiaohui [1 ]
Li, Chunlei [1 ]
Ding, Shumin [2 ]
Liao, Liang [1 ]
Affiliations
[1] Zhongyuan Univ Technol, Sch Elect & Informat Engn, Zhengzhou, Peoples R China
[2] Zhongyuan Univ Technol, Sch Energy & Environm, Zhengzhou, Peoples R China
Funding
National Natural Science Foundation of China
DOI
10.1049/ipr2.12338
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
As the performance of convolutional neural networks (CNNs) has increased, so have their storage and power demands. Among the methods in the literature, filter pruning is a crucial technique for constructing lightweight networks. However, current filter pruning methods still suffer from complicated procedures and training inefficiency. This paper proposes an effective filter pruning method that uses the saliency of the feature map (SFM), i.e. its information entropy, as a theoretical guide to whether a filter is essential. The pruning principle used here is that a filter whose feature map shows weak saliency at an early stage will not contribute significantly to the final accuracy. Thus, one can efficiently prune the non-salient feature maps with small information entropy, together with their corresponding filters. In addition, an over-parameterized convolution method is employed to improve the pruned model's accuracy without increasing the parameter count at inference time. Experimental results show that, without introducing any additional constraints, the method advances the state-of-the-art in FLOPs and parameter reduction at similar accuracy. For example, on CIFAR-10 the pruned VGG-16 loses only 0.39% in Top-1 accuracy while reducing parameters by 83.3% and FLOPs by 66.7%. On ImageNet-100, the pruned ResNet-50 degrades only 0.76% in Top-1 accuracy while reducing parameters by 61.19% and FLOPs by 62.98%.
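The core idea of the abstract, ranking filters by the information entropy of their feature maps and pruning the least salient ones, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the histogram bin count, and the fixed activation range used for binning are all assumptions made here for demonstration.

```python
import math
from collections import Counter

def feature_map_entropy(values, bins=32, lo=-4.0, hi=4.0):
    """Shannon entropy (bits) of a flattened feature map's activations,
    histogrammed over a fixed range [lo, hi) -- an assumption of this sketch;
    a near-constant (non-salient) map concentrates in one bin, giving ~0 bits."""
    width = (hi - lo) / bins
    counts = Counter(min(bins - 1, max(0, int((v - lo) / width)))
                     for v in values)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_filters_to_prune(channel_acts, prune_ratio=0.5):
    """channel_acts: one flattened activation list per output channel.
    Returns the indices of the lowest-entropy (least salient) channels,
    i.e. the filters that are candidates for pruning."""
    scores = [feature_map_entropy(a) for a in channel_acts]
    k = round(prune_ratio * len(channel_acts))
    return sorted(range(len(scores)), key=lambda i: scores[i])[:k]

# Toy example: channel 0 is constant (entropy 0), channel 1 is widely spread.
acts = [[0.0] * 64, [-3.9 + 0.12 * i for i in range(64)]]
print(select_filters_to_prune(acts, prune_ratio=0.5))  # → [0]
```

In a real pipeline the per-channel activations would come from forward passes over a calibration batch, and the selected filters (with their matching input channels in the next layer) would then be removed before fine-tuning.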
Pages: 123 - 133 (11 pages)
Related papers
50 in total
  • [41] ChoiceNet: CNN learning through choice of multiple feature map representations
    Rayhan, Farshid
    Galata, Aphrodite
    Cootes, Tim F.
    PATTERN ANALYSIS AND APPLICATIONS, 2021, 24 (04) : 1757 - 1767
  • [42] Enhancing semi-supervised contrastive learning through saliency map for diabetic retinopathy grading
    Zhang, Jiacheng
    Jin, Rong
    Liu, Wenqiang
    IET COMPUTER VISION, 2024, 18 (08) : 1127 - 1137
  • [43] Deep neural network compression through interpretability-based filter pruning
    Yao, Kaixuan
    Cao, Feilong
    Leung, Yee
    Liang, Jiye
    PATTERN RECOGNITION, 2021, 119
  • [44] Thermodynamics modeling of deep learning systems for a temperature based filter pruning technique
    Lapenna, M.
    Faglioni, F.
    Fioresi, R.
    FRONTIERS IN PHYSICS, 2023, 11
  • [45] Enhancing CNN efficiency through mutual information-based filter pruning
    Lu, Jingqi
    Wang, Ruiqing
    Zuo, Guanpeng
    Zhang, Wu
    Jin, Xiu
    Rao, Yuan
    DIGITAL SIGNAL PROCESSING, 2024, 151
  • [46] A multi-agent reinforcement learning based approach for automatic filter pruning
    Li, Zhemin
    Zuo, Xiaojing
    Song, Yiping
    Liang, Dong
    Xie, Zheng
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [47] Fast-learning adaptive-subspace self-organizing map: An application to saliency-based invariant image feature construction
    Zheng, Huicheng
    Lefebvre, Gregoire
    Laurent, Christophe
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2008, 19 (05): : 746 - 757
  • [48] Compact feature hashing for machine learning based malware detection
    Moon, Damin
    Lee, JaeKoo
    Yoon, MyungKeun
    ICT EXPRESS, 2022, 8 (01): : 124 - 129
  • [49] Convolutional neural network pruning based on multi-objective feature map selection for image classification
    Jiang, Pengcheng
    Xue, Yu
    Neri, Ferrante
    APPLIED SOFT COMPUTING, 2023, 139
  • [50] Infrared Small Target Detection Through Multiple Feature Analysis Based on Visual Saliency
    Chen, Yuwen
    Song, Bin
    Du, Xiaojiang
    Guizani, Mohsen
    IEEE ACCESS, 2019, 7 : 38996 - 39004