Linearly Replaceable Filters for Deep Network Channel Pruning

Cited by: 0
Authors
Joo, Donggyu [1 ]
Yi, Eojindl [1 ]
Baek, Sunghyun [1 ]
Kim, Junmo [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon, South Korea
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Convolutional neural networks (CNNs) have achieved remarkable results; however, despite the progress of deep learning, practical applications remain limited because such heavy networks can only be run with the latest hardware and software support. Network pruning is therefore gaining attention for general use in various fields. This paper proposes a novel channel pruning method, Linearly Replaceable Filter (LRF), which suggests that a filter is replaceable if it can be approximated by a linear combination of the other filters. In addition, a method called Weights Compensation is proposed to support LRF: it effectively reduces the output difference caused by removing filters through direct weight modification. Through various experiments, we confirm that our method achieves state-of-the-art performance on several benchmarks. In particular, on ImageNet, LRF-60 reduces approximately 56% of the FLOPs of ResNet-50 without any drop in top-5 accuracy. Extensive analyses further demonstrate the effectiveness of our approaches.
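The abstract describes two ideas: scoring each filter by how well a linear combination of the remaining filters in the same layer reproduces it, and compensating the following layer's weights when a replaceable filter is removed. The sketch below is a minimal NumPy illustration of that reading of the abstract, not the authors' released implementation; it ignores the intermediate non-linearity and batch normalization, and the names `replaceability_scores`, `prune_with_compensation`, `w_conv`, and `w_next` are illustrative assumptions.

```python
import numpy as np

def replaceability_scores(w_conv):
    """w_conv: conv weights of shape (C_out, C_in, k, k).
    Fit each filter by least squares with the remaining filters of the same
    layer; a small residual means the filter is 'linearly replaceable'."""
    c_out = w_conv.shape[0]
    flat = w_conv.reshape(c_out, -1)              # one row per filter
    scores, coeffs = np.empty(c_out), []
    for i in range(c_out):
        others = np.delete(flat, i, axis=0).T     # (C_in*k*k, C_out-1)
        lam, *_ = np.linalg.lstsq(others, flat[i], rcond=None)
        scores[i] = np.linalg.norm(others @ lam - flat[i])
        coeffs.append(lam)
    return scores, coeffs

def prune_with_compensation(w_conv, w_next, idx, lam):
    """Remove filter `idx` from w_conv and fold its contribution into w_next
    (shape (C_next, C_out, k, k)). Since filter idx ~ sum_j lam_j * filter_j,
    the next layer's input channel idx is redistributed onto the surviving
    channels, scaled by the fitted coefficients."""
    keep = [j for j in range(w_conv.shape[0]) if j != idx]
    w_next_comp = w_next.copy()
    for pos, j in enumerate(keep):                # np.delete order matches keep
        w_next_comp[:, j] += lam[pos] * w_next[:, idx]
    return w_conv[keep], w_next_comp[:, keep]

# Toy usage: prune the single most replaceable filter of a random layer.
w_conv = np.random.randn(16, 8, 3, 3)
w_next = np.random.randn(32, 16, 3, 3)
scores, coeffs = replaceability_scores(w_conv)
idx = int(np.argmin(scores))
w_conv_p, w_next_p = prune_with_compensation(w_conv, w_next, idx, coeffs[idx])
print(w_conv_p.shape, w_next_p.shape)             # (15, 8, 3, 3) (32, 15, 3, 3)
```

Because convolution is linear in its weights, the removed filter's response is approximately the coefficient-weighted sum of the remaining responses, which is why folding those coefficients into the next layer's input channels roughly preserves its output in this simplified setting.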
Pages: 8021-8029
Number of pages: 9
Related Papers
50 records in total
  • [31] Neural network pruning based on channel attention mechanism
    Hu, Jianqiang
    Liu, Yang
    Wu, Keshou
    CONNECTION SCIENCE, 2022, 34 (01) : 2201 - 2218
  • [32] Channel pruning based on convolutional neural network sensitivity
    Yang, Chenbin
    Liu, Huiyi
    NEUROCOMPUTING, 2022, 507 : 97 - 106
  • [33] Model Compression Based on Differentiable Network Channel Pruning
    Zheng, Yu-Jie
    Chen, Si-Bao
    Ding, Chris H. Q.
    Luo, Bin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (12) : 10203 - 10212
  • [34] Revisiting Random Channel Pruning for Neural Network Compression
    Li, Yawei
    Adamczewski, Kamil
    Li, Wen
    Gu, Shuhang
    Timofte, Radu
    Van Gool, Luc
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 191 - 201
  • [35] Dynamic Network Pruning with Interpretable Layerwise Channel Selection
    Wang, Yulong
    Zhang, Xiaolu
    Hu, Xiaolin
    Zhang, Bo
    Su, Hang
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 6299 - 6306
  • [36] Transfer channel pruning for compressing deep domain adaptation models
    Yu, Chaohui
    Wang, Jindong
    Chen, Yiqiang
    Qin, Xin
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2019, 10 (11) : 3129 - 3144
  • [37] Pruning and quantization for deep neural network acceleration: A survey
    Liang, Tailin
    Glossner, John
    Wang, Lei
    Shi, Shaobo
    Zhang, Xiaotong
    NEUROCOMPUTING, 2021, 461 : 370 - 403
  • [38] A Discriminant Information Approach to Deep Neural Network Pruning
    Hou, Zejiang
    Kung, Sun-Yuan
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 9553 - 9560
  • [39] MULTI-LOSS-AWARE CHANNEL PRUNING OF DEEP NETWORKS
    Hu, Yiming
    Sun, Siyang
    Li, Jianquan
    Zhu, Jiagang
    Wang, Xingang
    Gu, Qingyi
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 889 - 893
  • [40] Deep Neural Network Pruning Using Persistent Homology
    Watanabe, Satoru
    Yamana, Hayato
    2020 IEEE THIRD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE 2020), 2020, : 153 - 156