Speeding-up pruning for Artificial Neural Networks: Introducing Accelerated Iterative Magnitude Pruning

Cited by: 8
Authors
Zullich, Marco [1 ]
Medvet, Eric [1 ]
Pellegrino, Felice Andrea [1 ]
Ansuini, Alessio [2 ]
Affiliations
[1] Univ Trieste, Dept Engn & Architecture, Trieste, Italy
[2] AREA Sci Pk, Trieste, Italy
Keywords
Artificial Neural Network; Convolutional Neural Network; Neural Network Pruning; Magnitude Pruning; Lottery Ticket Hypothesis
DOI
10.1109/ICPR48806.2021.9412705
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In recent years, the pruning of Artificial Neural Networks (ANNs) has become the focal point of much research, due to the extreme overparametrization of such models. This has urged the scientific community to investigate methods for simplifying the structure of weights in ANNs, mainly in an effort to reduce the time needed for both training and inference. Frankle and Carbin [1], and later Renda, Frankle, and Carbin [2], introduced and refined an iterative pruning method which is able to effectively prune the network of a great portion of its parameters with little to no loss in performance. On the downside, this method requires a large amount of time, since, at each iteration, the network has to be trained for (almost) the same number of epochs as the unpruned network. In this work, we show that, in a limited setting, when targeting high overall sparsity rates, this training time can be reduced by more than 50% for every iteration save the last, while yielding a final pruned network whose performance is comparable to that of the ANN obtained with the existing method.
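To make the procedure concrete, the following is a minimal sketch of an iterative magnitude pruning loop with the reduced-epoch schedule described above. It assumes PyTorch; the epoch counts, pruning rate, and training loop are illustrative placeholders rather than the paper's exact experimental setup, and the weight/learning-rate rewinding used by Renda, Frankle, and Carbin [2] is omitted for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    def train(model, loader, epochs, lr=0.01):
        # Plain supervised training loop (placeholder for the real schedule).
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for inputs, targets in loader:
                optimizer.zero_grad()
                loss_fn(model(inputs), targets).backward()
                optimizer.step()

    def accelerated_imp(model, loader, iterations=5, rate=0.2,
                        full_epochs=30, short_epochs=12):
        # Layers whose weights are pruned by magnitude at each iteration.
        prunable = [m for m in model.modules()
                    if isinstance(m, (nn.Linear, nn.Conv2d))]
        for it in range(iterations):
            # Acceleration idea: train for fewer epochs at every iteration
            # except the last one, which keeps the full epoch budget.
            epochs = full_epochs if it == iterations - 1 else short_epochs
            train(model, loader, epochs)
            if it < iterations - 1:
                # Remove `rate` of the weights still remaining in each layer;
                # repeated calls accumulate masks (iterative pruning).
                for layer in prunable:
                    prune.l1_unstructured(layer, name="weight", amount=rate)
        return model

With these placeholder defaults, a (hypothetical) call such as accelerated_imp(MyConvNet(), train_loader) runs four shortened train-then-prune rounds followed by one full-length final training pass.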
Pages: 3868-3875
Page count: 8
Related Papers
50 records in total
  • [1] Artificial neural networks for speeding-up the experimental calibration of propulsion systems
    De Simio, Luigi
    Iannaccone, Sabato
    Iazzetta, Aniello
    Auriemma, Maddalena
    FUEL, 2023, 345
  • [2] Speeding-up convolutional neural networks: A survey
    Lebedev, V.
    Lempitsky, V.
    BULLETIN OF THE POLISH ACADEMY OF SCIENCES-TECHNICAL SCIENCES, 2018, 66 (6): 799-810
  • [3] Magnitude and Uncertainty Pruning Criterion for Neural Networks
    Ko, Vinnie
    Oehmcke, Stefan
    Gieseke, Fabian
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019: 2317-2326
  • [4] Iterative clustering pruning for convolutional neural networks
    Chang, Jingfei
    Lu, Yang
    Xue, Ping
    Xu, Yiqun
    Wei, Zhen
    KNOWLEDGE-BASED SYSTEMS, 2023, 265
  • [5] An iterative pruning algorithm for feedforward neural networks
    Castellano, G.
    Fanelli, A. M.
    Pelillo, M.
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1997, 8 (3): 519-531
  • [6] Speeding Up Neural Machine Translation Decoding by Cube Pruning
    Zhang, Wen
    Huang, Liang
    Feng, Yang
    Shen, Lei
    Liu, Qun
    2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018: 4284-4294
  • [7] Pruning Randomly Initialized Neural Networks with Iterative Randomization
    Chijiwa, Daiki
    Yamaguchi, Shin'ya
    Ida, Yasutoshi
    Umakoshi, Kenji
    Inoue, Tomohiro
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [8] Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks
    Belay, Kaleab
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 13126-13127