Auto-Balanced Filter Pruning for Efficient Convolutional Neural Networks

Cited by: 0
Authors
Ding, Xiaohan [1 ]
Ding, Guiguang [1 ]
Han, Jungong [2 ]
Tang, Sheng [3 ]
Affiliations
[1] Tsinghua Univ, Sch Software, Beijing 100084, Peoples R China
[2] Univ Lancaster, Sch Comp & Commun, Lancaster LA1 4YW, England
[3] Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In recent years, considerable research effort has been devoted to compression techniques for convolutional neural networks (CNNs). Many works so far have focused on connection-pruning methods, which produce sparse parameter tensors in convolutional or fully connected layers. Several studies have demonstrated that even simple methods can effectively eliminate connections of a CNN. However, since these methods make parameter tensors sparser but no smaller, the compression may not translate directly into acceleration without support from specially designed hardware. In this paper, we propose an iterative approach named Auto-balanced Filter Pruning: we pre-train the network in an innovative auto-balanced way to transfer the representational capacity of its convolutional layers to a fraction of the filters, prune the redundant ones, and then re-train the network to restore its accuracy. In this way, a smaller version of the original network is learned and the number of floating-point operations (FLOPs) is reduced. Applying this method to several common CNNs, we show that a large portion of the filters can be discarded without an obvious accuracy drop, leading to a significant reduction of computational burden. Concretely, we reduce the inference cost of LeNet-5 on MNIST, and of VGG-16 and ResNet-56 on CIFAR-10, by 95.1%, 79.7%, and 60.9%, respectively.
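The prune step of the pipeline described above (pre-train, prune redundant filters, re-train) can be illustrated with a minimal NumPy sketch. Note the assumptions: filter importance is taken here as the per-filter L1 norm, a common stand-in; the paper's actual criterion relies on its auto-balanced pre-training to concentrate capacity in a subset of filters, which is not reproduced here.

```python
import numpy as np

def prune_filters(weights, keep_ratio):
    """Keep the highest-importance filters of one conv layer.

    weights: array of shape (out_channels, in_channels, kH, kW).
    Importance = per-filter L1 norm (illustrative stand-in for the
    paper's auto-balanced criterion). Returns the pruned weight
    tensor and the indices of the filters that were kept.
    """
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    # L1 norm of each filter, flattened over (in_channels, kH, kW)
    importance = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    # Indices of the n_keep most important filters, in original order
    keep = np.sort(np.argsort(importance)[-n_keep:])
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32, 3, 3))       # a hypothetical conv layer
pruned, kept = prune_filters(w, keep_ratio=0.25)
print(pruned.shape)  # (16, 32, 3, 3)
```

In a full pipeline, the next layer's input channels would be sliced with the same `kept` indices, and the shrunken network would then be re-trained to recover accuracy, as the abstract describes.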
Pages: 6797 - 6804
Page count: 8
Related papers
50 records in total
  • [1] Global balanced iterative pruning for efficient convolutional neural networks
    Chang, Jingfei
    Lu, Yang
    Xue, Ping
    Xu, Yiqun
    Wei, Zhen
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (23): : 21119 - 21138
  • [3] Filter Pruning for Efficient Transfer Learning in Deep Convolutional Neural Networks
    Reinhold, Caique
    Roisenberg, Mauro
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, PT I, 2019, 11508 : 191 - 202
  • [4] Pruning feature maps for efficient convolutional neural networks
    Guo, Xiao-ting
    Xie, Xin-shu
    Lang, Xun
    OPTIK, 2023, 281
  • [5] Pruning convolutional neural networks via filter similarity analysis
    Geng, Lili
    Niu, Baoning
    MACHINE LEARNING, 2022, 111 (09) : 3161 - 3180
  • [6] Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks
    He, Yang
    Dong, Xuanyi
    Kang, Guoliang
    Fu, Yanwei
    Yan, Chenggang
    Yang, Yi
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (08) : 3594 - 3604
  • [7] A Filter Rank Based Pruning Method for Convolutional Neural Networks
    Liu, Hao
    Guan, Zhenyu
    Lei, Peng
    2021 IEEE 20TH INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (TRUSTCOM 2021), 2021, : 1318 - 1322
  • [9] Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
    He, Yang
    Kang, Guoliang
    Dong, Xuanyi
    Fu, Yanwei
    Yang, Yi
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 2234 - 2240
  • [10] Filter pruning for convolutional neural networks in semantic image segmentation
    Lopez-Gonzalez, Clara I.
    Gasco, Esther
    Barrientos-Espillco, Fredy
    Besada-Portas, Eva
    Pajares, Gonzalo
    NEURAL NETWORKS, 2024, 169 : 713 - 732