Dynamic Network Structured Pruning via Feature Coefficients of Layer Fusion

Cited: 0
Authors
Lu H. [1 ]
Yuan X. [1 ]
Affiliations
[1] School of Automation, Nanjing University of Information Science and Technology, Nanjing
Funding
National Natural Science Foundation of China
Keywords
Dynamic Parameters; Layer Fusion Feature Coefficient; Model Complexity; Structured Pruning
DOI
10.16451/j.cnki.issn1003-6059.201911010
Abstract
Pruning is an effective way to reduce the complexity of a model. Existing pruning methods consider only the influence of the convolutional layer on the feature maps, so redundant filters cannot be identified accurately. In this paper, a dynamic network structured pruning method based on layer fusion feature coefficients is proposed. Taking into account the influence of both the convolutional layer and the batch normalization layer on the feature maps, the importance of each filter is determined by multiple dynamic parameters, and redundant filters are searched for dynamically to obtain the optimal network structure. Experiments on the standard CIFAR-10 and CIFAR-100 datasets show that both residual networks and lightweight networks maintain high accuracy at large pruning rates.
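The abstract does not give the exact fusion formula. The sketch below is a minimal illustration, not the authors' method: it assumes the layer fusion feature coefficient of a filter is a weighted combination of the filter's L1 norm from the convolutional layer and the corresponding BatchNorm scale factor, with hypothetical fusion weights alpha and beta standing in for the dynamic parameters.

```python
# Hypothetical sketch of a layer-fusion filter score for one Conv2d + BatchNorm2d pair.
# alpha and beta are assumed fusion weights; the paper's dynamic parameters may differ.
import torch
import torch.nn as nn


def fused_filter_scores(conv: nn.Conv2d, bn: nn.BatchNorm2d,
                        alpha: float = 0.5, beta: float = 0.5) -> torch.Tensor:
    # Per-filter L1 norm of the convolution kernels, shape: (out_channels,)
    conv_norm = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    # Per-channel BatchNorm scale factors (gamma), shape: (out_channels,)
    bn_scale = bn.weight.detach().abs()
    # Normalize each term so the two layers contribute on a comparable scale
    conv_norm = conv_norm / (conv_norm.max() + 1e-12)
    bn_scale = bn_scale / (bn_scale.max() + 1e-12)
    # Fused coefficient: weighted combination of conv and BatchNorm evidence
    return alpha * conv_norm + beta * bn_scale


def prune_mask(scores: torch.Tensor, prune_ratio: float) -> torch.Tensor:
    # Keep filters with the highest fused scores; mark the rest as prunable
    num_prune = int(prune_ratio * scores.numel())
    threshold = torch.kthvalue(scores, max(num_prune, 1)).values
    return scores > threshold


if __name__ == "__main__":
    conv = nn.Conv2d(16, 32, kernel_size=3, padding=1, bias=False)
    bn = nn.BatchNorm2d(32)
    scores = fused_filter_scores(conv, bn)
    mask = prune_mask(scores, prune_ratio=0.5)
    print(f"filters kept: {int(mask.sum())} / {mask.numel()}")
```

In this reading, the BatchNorm scale factor captures how strongly a feature map is passed on after normalization, so a filter with both a small kernel norm and a small scale factor is treated as redundant; the dynamic search over which filters to drop would then operate on these scores during training.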
Pages: 1051-1059
Number of pages: 8