Linearly Replaceable Filters for Deep Network Channel Pruning

Cited by: 0
Authors: Joo, Donggyu [1]; Yi, Eojindl [1]; Baek, Sunghyun [1]; Kim, Junmo [1]
Affiliation: [1] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon, South Korea
Keywords: (none listed)
DOI: (not available)
CLC classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Convolutional neural networks (CNNs) have achieved remarkable results; however, despite the development of deep learning, practical user applications are fairly limited because heavy networks can run only with the latest hardware and software support. Therefore, network pruning is gaining attention for general applications in various fields. This paper proposes a novel channel pruning method, Linearly Replaceable Filter (LRF), which suggests that a filter that can be approximated by a linear combination of other filters is replaceable. Moreover, an additional method called Weights Compensation is proposed to support the LRF method. This is a technique that effectively reduces the output difference caused by removing filters via direct weight modification. Through various experiments, we have confirmed that our method achieves state-of-the-art performance on several benchmarks. In particular, on ImageNet, LRF-60 reduces approximately 56% of FLOPs on ResNet-50 without any top-5 accuracy drop. Further, through extensive analyses, we proved the effectiveness of our approaches.
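The abstract's two ideas can be illustrated concretely. Below is a hedged, minimal sketch under simplifying assumptions: layers are treated as 1x1 convolutions with no intervening nonlinearity (plain matrix multiplications), and the function names and scoring rule are illustrative, not the authors' actual algorithm.

```python
# Minimal sketch of the two ideas in the abstract (an assumption-laden
# illustration, not the authors' implementation):
# (1) LRF: a filter that is well approximated by a linear combination of
#     the other filters in its layer is "replaceable";
# (2) Weights Compensation: fold the combination coefficients into the
#     next layer's weights so the composite output is preserved.
import numpy as np

def replaceability_scores(w1):
    """Residual norm of each filter's best least-squares approximation
    by the remaining filters (smaller = more replaceable)."""
    n = w1.shape[0]
    scores = np.empty(n)
    for i in range(n):
        others = np.delete(w1, i, axis=0).T            # basis of other filters
        coef, *_ = np.linalg.lstsq(others, w1[i], rcond=None)
        scores[i] = np.linalg.norm(others @ coef - w1[i])
    return scores

def compensate_and_prune(w1, w2, i):
    """Remove filter i of w1 and add its linear-combination coefficients
    into the corresponding input channels of w2 (weights compensation)."""
    keep = [j for j in range(w1.shape[0]) if j != i]
    coef, *_ = np.linalg.lstsq(w1[keep].T, w1[i], rcond=None)
    return w1[keep], w2[:, keep] + np.outer(w2[:, i], coef)

rng = np.random.default_rng(0)
w1 = rng.standard_normal((8, 32))                      # layer 1: 8 filters
w1[3] = 0.5 * w1[0] - 0.25 * w1[1]                     # filter 3 is replaceable
w2 = rng.standard_normal((5, 8))                       # layer 2 consumes 8 channels
x = rng.standard_normal(32)

i = int(np.argmin(replaceability_scores(w1)))          # picks filter 3
w1p, w2p = compensate_and_prune(w1, w2, i)
print(np.allclose(w2 @ (w1 @ x), w2p @ (w1p @ x)))     # True: output unchanged
```

When a filter is only approximately replaceable, the compensated output differs by the residual of the least-squares fit, which is why the paper pairs the replaceability criterion with the compensation step.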
Pages: 8021 - 8029 (9 pages)
Related papers (50 records in total)
  • [1] PRF: deep neural network compression by systematic pruning of redundant filters
    Sarvani, C.H.
    Ghorai, Mrinmoy
    Basha, S. H. Shabbeer
    Neural Computing and Applications, 2024, 36 (33): 20607 - 20616
  • [2] A framework for deep neural network multiuser authorization based on channel pruning
    Wang, Linna
    Song, Yunfei
    Zhu, Yujia
    Xia, Daoxun
    Han, Guoquan
    Concurrency and Computation: Practice & Experience, 2023, 35 (21)
  • [3] Deep Neural Network Channel Pruning Compression Method for Filter Elasticity
    Li, Ruiquan
    Zhu, Lu
    Liu, Yuanyuan
    Computer Engineering and Applications, 2024, 60 (06): 163 - 171
  • [4] Structural Watermarking to Deep Neural Networks via Network Channel Pruning
    Zhao, Xiangyu
    Yao, Yinzhe
    Wu, Hanzhou
    Zhang, Xinpeng
    2021 IEEE International Workshop on Information Forensics and Security (WIFS), 2021: 14 - 19
  • [5] Pruning of Network Filters for Small Dataset
    Li, Zhuang
    Xu, Lihong
    Zhu, Shuwei
    IEEE Access, 2020, 8: 4522 - 4533
  • [6] Collaborative Channel Pruning for Deep Networks
    Peng, Hanyu
    Wu, Jiaxiang
    Chen, Shifeng
    Huang, Junzhou
    International Conference on Machine Learning, Vol. 97, 2019
  • [7] Network pruning via probing the importance of filters
    Kuang, Jiandong
    Shao, Mingwen
    Wang, Ran
    Zuo, Wangmeng
    Ding, Weiping
    International Journal of Machine Learning and Cybernetics, 2022, 13 (09): 2403 - 2414
  • [8] Network Pruning Using Adaptive Exemplar Filters
    Lin, Mingbao
    Ji, Rongrong
    Li, Shaojie
    Wang, Yan
    Wu, Yongjian
    Huang, Feiyue
    Ye, Qixiang
    IEEE Transactions on Neural Networks and Learning Systems, 2022, 33 (12): 7357 - 7366
  • [9] Replacing replaceable filters
    Ashley, S
    Mechanical Engineering, 1995, 117 (11): 38 - 38