Filter Pruning by Switching to Neighboring CNNs With Good Attributes

Cited by: 43
Authors
He, Yang [1 ,2 ]
Liu, Ping [1 ,2 ]
Zhu, Linchao [1 ]
Yang, Yi [3 ]
Affiliations
[1] Univ Technol Sydney, Australian Artificial Intelligence Inst, ReLER Lab, Sydney, NSW 2007, Australia
[2] ASTAR, Ctr Frontier AI Res CFAR, Singapore 138632, Singapore
[3] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310000, Peoples R China
Funding
Australian Research Council;
Keywords
Neural networks; Training; Training data; Neurons; Libraries; Graphics processing units; Electronic mail; Filter pruning; meta-attributes; network compression; neural networks; CLASSIFICATION; ACCURACY;
DOI
10.1109/TNNLS.2022.3149332
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Filter pruning is an effective way to reduce the computational cost of neural networks. Existing methods show that updating previously pruned filters enables larger model capacity and better performance. However, during the iterative pruning process, the pruning criterion remains fixed even as the network weights are updated to new values. In addition, when evaluating filter importance, these methods consider only the magnitude information of the filters. Yet filters in a neural network do not work in isolation; they influence one another. Consequently, the magnitude of a filter, which reflects only that individual filter, is insufficient for judging its importance. To address these problems, we propose meta-attribute-based filter pruning (MFP). First, to extend existing magnitude-based pruning criteria, we introduce a new set of criteria that consider the geometric distance between filters. Second, to explicitly assess the current state of the network, we adaptively select the most suitable pruning criterion via a meta-attribute, a property of the neural network at its current state. Experiments on two image classification benchmarks validate our method: for ResNet-50 on ILSVRC-2012, we reduce more than 50% of FLOPs with only a 0.44% top-5 accuracy loss.
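The two families of criteria mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: a magnitude criterion scores each filter by its L2 norm, while a geometric-distance criterion scores each filter by its summed Euclidean distance to the other filters in the same layer, so that filters close to the others are treated as redundant. The function names and the pruning ratio are assumptions made for the example.

```python
import numpy as np

def magnitude_scores(filters):
    """Score each filter by its L2 norm (a classic magnitude-based criterion)."""
    flat = filters.reshape(len(filters), -1)
    return np.linalg.norm(flat, axis=1)

def distance_scores(filters):
    """Score each filter by its summed Euclidean distance to every other
    filter in the layer; filters near the 'center' of the layer get low
    scores and are treated as redundant (a geometric-distance criterion)."""
    flat = filters.reshape(len(filters), -1)
    pairwise = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    return pairwise.sum(axis=1)

def prune_indices(filters, criterion, ratio=0.5):
    """Return indices of the lowest-scoring filters under `criterion`."""
    scores = criterion(filters)
    k = int(len(filters) * ratio)
    return np.argsort(scores)[:k]
```

Note that the two criteria can disagree: a filter with small weights ranks low under the magnitude criterion, while a filter that closely resembles its neighbors ranks low under the distance criterion, which is exactly why a state-dependent selection between criteria can help.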
Pages: 8044 - 8056 (13 pages)
Related Papers
50 records in total
  • [41] ATTRIBUTES OF A GOOD PRACTICING PHYSICIAN
    PRICE, PB
    LOUGHMILLER, GC
    LEWIS, EG
    NELSON, DE
    MURRAY, SL
    TAYLOR, CW
    JOURNAL OF MEDICAL EDUCATION, 1971, 46 (03): 229 - +
  • [42] A-pruning: a lightweight pineapple flower counting network based on filter pruning
    Yu, Guoyan
    Cai, Ruilin
    Luo, Yingtong
    Hou, Mingxin
    Deng, Ruoling
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (02) : 2047 - 2066
  • [43] Cluster Pruning: An Efficient Filter Pruning Method for Edge AI Vision Applications
    Gamanayake, Chinthaka
    Jayasinghe, Lahiru
    Ng, Benny Kai Kiat
    Yuen, Chau
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2020, 14 (04) : 802 - 816
  • [45] Weight-adaptive channel pruning for CNNs based on closeness-centrality modeling
    Dong, Zhao
    Duan, Yuanzhi
    Zhou, Yue
    Duan, Shukai
    Hu, Xiaofang
    APPLIED INTELLIGENCE, 2024, 54 (01) : 201 - 215
  • [46] Boosting Lightweight CNNs Through Network Pruning and Knowledge Distillation for SAR Target Recognition
    Wang, Zhen
    Du, Lan
    Li, Yi
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2021, 14 : 8386 - 8397
  • [47] A Simple and Effective Convolutional Filter Pruning based on Filter Dissimilarity Analysis
    Erick, F. X.
    Sawant, Shrutika S.
    Goeb, Stephan
    Holzer, N.
    Lang, E. W.
    Goetz, Th
    ICAART: PROCEEDINGS OF THE 14TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 3, 2022: 139 - 145
  • [48] Filter pruning via expectation-maximization
    Xu, Sheng
    Li, Yanjing
    Yang, Linlin
    Zhang, Baochang
    Sun, Dianmin
    Liu, Kexin
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (15): 12807 - 12818
  • [50] ONLINE FILTER WEAKENING AND PRUNING FOR EFFICIENT CONVNETS
    Zhou, Zhengguang
    Zhou, Wengang
    Hong, Richang
    Li, Houqiang
    2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018