Dynamical Channel Pruning by Conditional Accuracy Change for Deep Neural Networks

Cited by: 53
Authors:
Chen, Zhiqiang [1 ,2 ]
Xu, Ting-Bing [2 ,3 ]
Du, Changde [1 ,2 ]
Liu, Cheng-Lin [2 ,3 ,4 ]
He, Huiguang [2 ,4 ,5 ]
Affiliations:
[1] Chinese Acad Sci CASIA, Inst Automat, Res Ctr Brain Inspired Intelligence, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci UCAS, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[3] Chinese Acad Sci CASIA, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
[4] Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
[5] Chinese Acad Sci CASIA, Res Ctr Brain Inspired Intelligence, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
Funding: National Natural Science Foundation of China
Keywords:
Training; Channel estimation; Logic gates; Computer architecture; Convolution; Biological neural networks; Automation; Conditional accuracy change (CAC); direct criterion; dynamical channel pruning; neural network compression; structure shaping;
DOI
10.1109/TNNLS.2020.2979517
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline Classification Codes: 081104; 0812; 0835; 1405
Abstract:
Channel pruning is an effective technique that has been widely applied to deep neural network compression. However, many existing methods prune from a pretrained model, thus resulting in repetitious pruning and fine-tuning processes. In this article, we propose a dynamical channel pruning method, which prunes unimportant channels at the early stage of training. Rather than utilizing indirect criteria (e.g., weight norm, absolute weight sum, and reconstruction error) to guide connection or channel pruning, we design criteria directly related to the final accuracy of a network to evaluate the importance of each channel. Specifically, a channelwise gate is designed to randomly enable or disable each channel so that the conditional accuracy changes (CACs) can be estimated, conditioned on each channel being disabled. Practically, we construct two effective and efficient criteria to dynamically estimate CAC at each iteration of training; thus, unimportant channels can be gradually pruned during the training process. Finally, extensive experiments on multiple data sets (i.e., ImageNet, CIFAR, and MNIST) with various networks (i.e., ResNet, VGG, and MLP) demonstrate that the proposed method effectively reduces the parameters and computations of the baseline network while yielding higher or competitive accuracy. Interestingly, if we Double the initial Channels and then Prune Half (DCPH) of them down to the baseline's counterpart, the network enjoys a remarkable performance improvement by shaping a more desirable structure.
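The abstract outlines the key mechanism: a channelwise gate randomly enables or disables channels during training so that a conditional accuracy change (CAC) can be estimated per channel, and low-CAC channels are pruned while training continues. The PyTorch sketch below is only a simplified illustration of that idea, not the paper's exact criteria; the class name ChannelGate, the drop probability, and the moving-average score update are assumptions introduced for illustration.

```python
import torch
import torch.nn as nn


class ChannelGate(nn.Module):
    """Channelwise gate that randomly disables channels during training and keeps
    a running per-channel importance score, a simplified stand-in for the paper's
    conditional accuracy change (CAC)."""

    def __init__(self, num_channels, drop_prob=0.1, momentum=0.9):
        super().__init__()
        self.drop_prob = drop_prob      # chance of temporarily disabling a channel
        self.momentum = momentum        # smoothing factor for the running score
        # 1 = channel alive, 0 = channel permanently pruned
        self.register_buffer("mask", torch.ones(num_channels))
        # running importance score per channel (higher = more important)
        self.register_buffer("score", torch.zeros(num_channels))
        self._last_gate = None

    def forward(self, x):               # x: (N, C, H, W)
        if self.training:
            # randomly switch off some of the still-alive channels for this step
            sample = (torch.rand_like(self.mask) > self.drop_prob).float()
            gate = self.mask * sample
        else:
            gate = self.mask
        self._last_gate = gate          # remember which channels were off
        return x * gate.view(1, -1, 1, 1)

    def update_scores(self, loss_value):
        # Assumed CAC-style update: channels that were disabled on a step with a
        # high loss accumulate a larger importance score (exponential moving average).
        disabled = 1.0 - self._last_gate
        self.score.mul_(self.momentum).add_((1.0 - self.momentum) * disabled * loss_value)

    def prune(self, num_to_prune):
        # permanently disable the alive channels with the lowest importance scores
        alive = self.mask.nonzero(as_tuple=True)[0]
        order = self.score[alive].argsort()
        self.mask[alive[order[:num_to_prune]]] = 0.0
```

In use, such a gate would follow a convolution's output channels; after each training step one would call update_scores(loss.item()) and periodically call prune(k), so that unimportant channels are removed during training rather than after it, in the spirit of the dynamical pruning described above.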
Pages: 799 - 813
Page count: 15
Related Papers (50 records)
  • [1] Conditional Automated Channel Pruning for Deep Neural Networks
    Liu, Yixin
    Guo, Yong
    Guo, Jiaxin
    Jiang, Luoqian
    Chen, Jian
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1275 - 1279
  • [2] Channel Pruning for Accelerating Very Deep Neural Networks
    He, Yihui
    Zhang, Xiangyu
    Sun, Jian
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 1398 - 1406
  • [3] Discrimination-aware Channel Pruning for Deep Neural Networks
    Zhuang, Zhuangwei
    Tan, Mingkui
    Zhuang, Bohan
    Liu, Jing
    Guo, Yong
    Wu, Qingyao
    Huang, Junzhou
    Zhu, Jinhui
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [4] Deeper Weight Pruning without Accuracy Loss in Deep Neural Networks
    Ahn, Byungmin
    Kim, Taewhan
    PROCEEDINGS OF THE 2020 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2020), 2020, : 73 - 78
  • [5] Identity-linked Group Channel Pruning for Deep Neural Networks
    Zhang, Chenxin
    Xu, Keqin
    Liu, Jie
    Kang, Liangyi
    Zhou, Zhiyang
    Ye, Dan
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [6] Fine-Grained Channel Pruning for Deep Residual Neural Networks
    Chen, Siang
    Huang, Kai
    Xiong, Dongliang
    Li, Bowen
    Claesen, Luc
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT II, 2020, 12397 : 3 - 14
  • [7] Compression of Deep Convolutional Neural Networks Using Effective Channel Pruning
    Guo, Qingbei
    Wu, Xiao-Jun
    Zhao, Xiuyang
    IMAGE AND GRAPHICS, ICIG 2019, PT I, 2019, 11901 : 760 - 772
  • [8] Structural Watermarking to Deep Neural Networks via Network Channel Pruning
    Zhao, Xiangyu
    Yao, Yinzhe
    Wu, Hanzhou
    Zhang, Xinpeng
    2021 IEEE INTERNATIONAL WORKSHOP ON INFORMATION FORENSICS AND SECURITY (WIFS), 2021, : 14 - 19
  • [9] Collaborative Channel Pruning for Deep Networks
    Peng, Hanyu
    Wu, Jiaxiang
    Chen, Shifeng
    Huang, Junzhou
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [10] Methods for Pruning Deep Neural Networks
    Vadera, Sunil
    Ameen, Salem
    IEEE ACCESS, 2022, 10 : 63280 - 63300