Discriminative Layer Pruning for Convolutional Neural Networks

Cited by: 28
Authors
Jordao, Artur [1 ]
Lie, Maiko [1 ]
Schwartz, William Robson [1 ]
Affiliations
[1] Univ Fed Minas Gerais, Dept Comp Sci, Smart Sense Lab, BR-31270901 Belo Horizonte, MG, Brazil
Keywords
Computer architecture; Estimation; Convolutional neural networks; Computational efficiency; Internet of Things; Visualization; Network compression; network pruning; convolutional neural networks
DOI
10.1109/JSTSP.2020.2975987
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
The predictive ability of convolutional neural networks (CNNs) can be improved by increasing their depth. However, increasing depth also increases computational cost significantly, in terms of both floating point operations and memory consumption, hindering applicability on resource-constrained systems such as mobile and internet of things (IoT) devices. Fortunately, most networks have spare capacity, that is, they require fewer parameters than they actually have to perform accurately. This motivates network compression methods, which remove or quantize parameters to improve resource-efficiency. In this work, we consider a straightforward strategy for removing entire convolutional layers to reduce network depth. Since it focuses on depth, this approach not only reduces memory usage, but also reduces prediction time significantly by mitigating the serialization overhead incurred by forwarding through consecutive layers. We show that a simple subspace projection approach can be employed to estimate the importance of network layers, enabling the pruning of CNNs to a resource-efficient depth within a given network size constraint. We estimate importance on a subspace computed using Partial Least Squares, a feature projection approach that preserves discriminative information. Consequently, this importance estimation is correlated to the contribution of the layer to the classification ability of the model. We show that cascading discriminative layer pruning with filter-oriented pruning improves the resource-efficiency of the resulting network compared to using any of them alone, and that it outperforms state-of-the-art methods. Moreover, we show that discriminative layer pruning alone, without cascading, achieves competitive resource-efficiency compared to methods that prune filters from all layers.
Pages: 828-837
Page count: 10
Related Papers
50 records in total
  • [41] Soft Taylor Pruning for Accelerating Deep Convolutional Neural Networks
    Rong, Jintao
    Yu, Xiyi
    Zhang, Mingyang
    Ou, Linlin
    IECON 2020: THE 46TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2020, : 5343 - 5349
  • [42] Reborn Filters: Pruning Convolutional Neural Networks with Limited Data
    Tang, Yehui
    You, Shan
    Xu, Chang
    Han, Jin
    Qian, Chen
    Shi, Boxin
    Xu, Chao
    Zhang, Changshui
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 5972 - 5980
  • [43] FRACTIONAL STEP DISCRIMINANT PRUNING: A FILTER PRUNING FRAMEWORK FOR DEEP CONVOLUTIONAL NEURAL NETWORKS
    Gkalelis, Nikolaos
    Mezaris, Vasileios
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (ICMEW), 2020,
  • [44] 1xN Pattern for Pruning Convolutional Neural Networks
    Lin, Mingbao
    Zhang, Yuxin
    Li, Yuchao
    Chen, Bohong
    Chao, Fei
    Wang, Mengdi
    Li, Shen
    Tian, Yonghong
    Ji, Rongrong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (04) : 3999 - 4008
  • [45] Filter pruning for convolutional neural networks in semantic image segmentation
    Lopez-Gonzalez, Clara I.
    Gasco, Esther
    Barrientos-Espillco, Fredy
    Besada-Portas, Eva
    Pajares, Gonzalo
    NEURAL NETWORKS, 2024, 169 : 713 - 732
  • [46] Compressing Convolutional Neural Networks by Pruning Density Peak Filters
    Jang, Yunseok
    Lee, Sangyoun
    Kim, Jaeseok
    IEEE ACCESS, 2021, 9 : 8278 - 8285
  • [47] Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks
    Dosovitskiy, Alexey
    Fischer, Philipp
    Springenberg, Jost Tobias
    Riedmiller, Martin
    Brox, Thomas
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2016, 38 (09) : 1734 - 1747
  • [48] Joint Supervision for Discriminative Feature Learning in Convolutional Neural Networks
    Guo, Jianyuan
    Yuan, Yuhui
    Zhang, Chao
    COMPUTER VISION, PT II, 2017, 772 : 509 - 520
  • [49] Extended Siamese Convolutional Neural Networks for Discriminative Feature Learning
    Lee, Sangyun
    Hong, Sungjun
    INTERNATIONAL JOURNAL OF FUZZY LOGIC AND INTELLIGENT SYSTEMS, 2022, 22 (04) : 339 - 349
  • [50] Tutor-Instructing Global Pruning for Accelerating Convolutional Neural Networks
    Yu, Fang
    Cui, Li
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 2792 - 2799