Robust Neural Pruning with Gradient Sampling Optimization for Residual Neural Networks

Cited by: 0
Author
Yun, Juyoung [1]
Affiliation
[1] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
Neural Networks; Optimization; Neural Pruning
DOI
10.1109/IJCNN60899.2024.10650301
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This research pioneers the integration of gradient sampling optimization techniques, particularly StochGradAdam, into the neural network pruning process. Our main objective is to address the significant challenge of maintaining accuracy in pruned neural models, which is critical in resource-constrained scenarios. Through extensive experimentation, we demonstrate that gradient sampling preserves accuracy during and after pruning significantly better than traditional optimization methods. Our study highlights the pivotal role of gradient sampling in robust learning and in retaining crucial information after substantial model simplification. The results on the CIFAR-10 dataset across residual neural architectures validate the versatility and effectiveness of our approach. This work presents a promising direction for developing efficient neural networks without compromising performance, even in environments with limited computational resources.
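
The record includes no implementation details, so the following PyTorch snippet is only a minimal, hypothetical sketch of the gradient-sampling idea the abstract names. It assumes StochGradAdam keeps each gradient entry with probability sample_rate via a Bernoulli mask and feeds the sampled gradient into an otherwise standard Adam update; the function name, mask rule, and default values are illustrative assumptions, not the paper's specification.

import torch

# Hypothetical sketch of one gradient-sampling Adam step (not the paper's code).
# Assumed rule: keep each gradient entry with probability sample_rate, zero the
# rest, and run a standard Adam update on the sampled gradient (no rescaling).
def stoch_grad_adam_step(param, grad, state, lr=1e-3, betas=(0.9, 0.999),
                         eps=1e-8, sample_rate=0.8):
    # Bernoulli mask: each entry survives with probability sample_rate.
    mask = (torch.rand_like(grad) < sample_rate).float()
    g = grad * mask
    # Standard Adam moment updates, computed on the sampled gradient.
    state["step"] += 1
    state["m"].mul_(betas[0]).add_(g, alpha=1 - betas[0])
    state["v"].mul_(betas[1]).addcmul_(g, g, value=1 - betas[1])
    # Bias-corrected moments, then the parameter update.
    m_hat = state["m"] / (1 - betas[0] ** state["step"])
    v_hat = state["v"] / (1 - betas[1] ** state["step"])
    param.data.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)

# Usage on a toy parameter: one backward pass, one sampled update.
p = torch.nn.Parameter(torch.randn(4))
(p ** 2).sum().backward()
state = {"step": 0, "m": torch.zeros_like(p), "v": torch.zeros_like(p)}
stoch_grad_adam_step(p, p.grad, state)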
Pages: 10
Related Papers
50 records in total
  • [1] HYDRA: Pruning Adversarially Robust Neural Networks
    Sehwag, Vikash
    Wang, Shiqi
    Mittal, Prateek
    Jana, Suman
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [2] Sparse optimization guided pruning for neural networks
    Shi, Yong
    Tang, Anda
    Niu, Lingfeng
    Zhou, Ruizhi
    NEUROCOMPUTING, 2024, 574
  • [3] Pruning of Deep Spiking Neural Networks through Gradient Rewiring
    Chen, Yanqi
    Yu, Zhaofei
    Fang, Wei
    Huang, Tiejun
    Tian, Yonghong
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 1713 - 1721
  • [4] Gradient and Magnitude Based Pruning for Sparse Deep Neural Networks
    Belay, Kaleab
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 13126 - 13127
  • [5] Reconsideration to pruning and regularization for complexity optimization in neural networks
    Park, H
    Lee, H
    ICONIP'02: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING: COMPUTATIONAL INTELLIGENCE FOR THE E-AGE, 2002, : 1649 - 1653
  • [6] Pruning Adversarially Robust Neural Networks without Adversarial Examples
    Jian, Tong
    Wang, Zifeng
    Wang, Yanzhi
    Dy, Jennifer
    Ioannidis, Stratis
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2022, : 993 - 998
  • [7] Fine-Grained Channel Pruning for Deep Residual Neural Networks
    Chen, Siang
    Huang, Kai
    Xiong, Dongliang
    Li, Bowen
    Claesen, Luc
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT II, 2020, 12397 : 3 - 14
  • [8] Channel pruning based on mean gradient for accelerating Convolutional Neural Networks
    Liu, Congcong
    Wu, Huaming
    SIGNAL PROCESSING, 2019, 156 : 84 - 91
  • [9] Structure optimization strategy of Neural Networks - Research on pruning algorithm
    Zhang, M
    Xu, YM
    PROCEEDINGS OF THE 3RD WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-5, 2000, : 877 - 881
  • [10] A pruning method for neural networks and its application for optimization in electromagnetics
    Guimaraes, FG
    Ramírez, JA
    IEEE TRANSACTIONS ON MAGNETICS, 2004, 40 (02) : 1160 - 1163