Linearly Replaceable Filters for Deep Network Channel Pruning

Cited: 0
Authors
Joo, Donggyu [1 ]
Yi, Eojindl [1 ]
Baek, Sunghyun [1 ]
Kim, Junmo [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon, South Korea
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Convolutional neural networks (CNNs) have achieved remarkable results; however, despite the progress of deep learning, practical user applications remain fairly limited because heavy networks can only run with the latest hardware and software support. Therefore, network pruning is gaining attention for general applications in various fields. This paper proposes a novel channel pruning method, Linearly Replaceable Filter (LRF), which suggests that a filter that can be approximated by a linear combination of other filters is replaceable. Moreover, an additional method called Weights Compensation is proposed to support the LRF method. This technique effectively reduces the output difference caused by removing filters through direct weight modification. Through various experiments, we confirm that our method achieves state-of-the-art performance on several benchmarks. In particular, on ImageNet, LRF-60 reduces approximately 56% of the FLOPs of ResNet-50 without a top-5 accuracy drop. Further, extensive analyses demonstrate the effectiveness of our approaches.
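The abstract's central criterion is that a filter which can be approximated by a linear combination of the other filters in its layer carries little unique information and is therefore replaceable. As a rough, hypothetical illustration (not the paper's exact algorithm; the function name, normalization, and ranking below are assumptions), one can score each filter by its least-squares approximation residual and treat the lowest-scoring filters as pruning candidates:

```python
import numpy as np

def replaceability_scores(weight):
    """Score each filter by how well the other filters in the same layer
    approximate it via a linear combination (lower = more replaceable).
    `weight` has shape (out_channels, in_channels, k, k)."""
    n = weight.shape[0]
    filters = weight.reshape(n, -1)               # one flattened vector per filter
    scores = np.empty(n)
    for i in range(n):
        target = filters[i]
        others = np.delete(filters, i, axis=0).T  # columns are the remaining filters
        coeffs, *_ = np.linalg.lstsq(others, target, rcond=None)
        residual = np.linalg.norm(others @ coeffs - target)
        scores[i] = residual / (np.linalg.norm(target) + 1e-12)
    return scores

# Toy usage: rank the 16 filters of a random 3x3 conv layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 8, 3, 3))
print(np.argsort(replaceability_scores(w))[:4])  # indices of the most replaceable filters
```

In the paper, removal is followed by Weights Compensation, which directly modifies the remaining weights to absorb the resulting output difference; the sketch above covers only the selection criterion.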
Pages: 8021-8029
Page count: 9
Related Papers
50 records in total
  • [21] Overview of Deep Convolutional Neural Network Pruning
    Li, Guang
    Liu, Fang
    Xia, Yuping
    2020 INTERNATIONAL CONFERENCE ON IMAGE, VIDEO PROCESSING AND ARTIFICIAL INTELLIGENCE, 2020, 11584
  • [22] Deep Capsule Network Based on Pruning Optimization
    Zheng X.-P.
    Liang X.
    Jisuanji Xuebao/Chinese Journal of Computers, 2022, 45 (07): 1557 - 1570
  • [23] Conditional Automated Channel Pruning for Deep Neural Networks
    Liu, Yixin
    Guo, Yong
    Guo, Jiaxin
    Jiang, Luoqian
    Chen, Jian
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1275 - 1279
  • [24] Convolutional neural network acceleration algorithm based on filters pruning
    Li H.
    Zhao W.-J.
    Han B.
    Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science), 2019, 53 (10): 1994 - 2002
  • [25] Channel Pruning for Accelerating Very Deep Neural Networks
    He, Yihui
    Zhang, Xiangyu
    Sun, Jian
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 1398 - 1406
  • [26] CALPA-NET: Channel-Pruning-Assisted Deep Residual Network for Steganalysis of Digital Images
    Tan, Shunquan
    Wu, Weilong
    Shao, Zilong
    Li, Qiushi
    Li, Bin
    Huang, Jiwu
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 131 - 146
  • [27] A lightweight deep neural network model and its applications based on channel pruning and group vector quantization
    Mingzhong Huang
    Yan Liu
    Lijie Zhao
    Guogang Wang
    Neural Computing and Applications, 2024, 36 : 5333 - 5346
  • [28] A lightweight deep neural network model and its applications based on channel pruning and group vector quantization
    Huang, Mingzhong
    Liu, Yan
    Zhao, Lijie
    Wang, Guogang
    NEURAL COMPUTING & APPLICATIONS, 2023, 36 (10): 5333 - 5346
  • [29] Pruning Filters while Training for Efficiently Optimizing Deep Learning Networks
    Roy, Sourjya
    Panda, Priyadarshini
    Srinivasan, Gopalakrishnan
    Raghunathan, Anand
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [30] Pruning CNN filters via quantifying the importance of deep visual representations
    Alqahtani, Ali
    Xie, Xianghua
    Jones, Mark W.
    Essa, Ehab
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2021, 208