Maximum output discrepancy computation for convolutional neural network compression

Cited by: 2
Authors
Mo, Zihao [1 ]
Xiang, Weiming [1 ]
Affiliations
[1] Augusta Univ, Sch Comp & Cyber Sci, 1120 15th St, Augusta, GA 30912 USA
Funding
U.S. National Science Foundation
Keywords
Reachability analysis; Convolutional neural network; Discrepancy computation; Neural network compression; Recognition
DOI
10.1016/j.ins.2024.120367
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Network compression methods minimize the number of network parameters and the computational cost while maintaining desired network performance. However, the safety assurance of many compression methods rests on large amounts of experimental data, and unforeseen inputs beyond that data may lead to unsafe consequences. In this work, we develop a discrepancy computation method for two convolutional neural networks that yields a concrete value characterizing the maximum output difference between a network and its compressed counterpart. Using ImageStar-based reachability analysis, we propose a novel method that merges the two networks to compute this difference. We detail the reachability computation for each layer type in the merged network, including convolution, max-pooling, fully connected, and ReLU layers. We apply our method to a numerical example to demonstrate its correctness. Furthermore, we evaluate our method on the VGG16 model compressed with Quantization Aware Training (QAT); the results show that our approach efficiently computes an accurate maximum output discrepancy between the original and compressed neural networks.
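The core idea in the abstract — bounding the worst-case output difference between a network and its compressed copy over a set of inputs — can be illustrated with a much simpler (and looser) stand-in than the paper's ImageStar reachability: interval bound propagation through two small fully connected networks, one of them crudely "quantized" by rounding its weights. Everything below (networks, weights, the rounding scheme) is invented for illustration and does not reproduce the paper's merged-network ImageStar method, which computes a far tighter result.

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate the box [l, u] through x -> W @ x + b (sound interval bounds)."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def interval_relu(l, u):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def interval_forward(layers, l, u):
    """Push an input box through a stack of affine+ReLU layers."""
    for W, b in layers:
        l, u = interval_affine(l, u, W, b)
        l, u = interval_relu(l, u)
    return l, u

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
original = [(W1, b1), (W2, b2)]
# Crude stand-in for a compressed network: weights rounded to one decimal.
compressed = [(np.round(W, 1), np.round(b, 1)) for W, b in original]

l0, u0 = -np.ones(3), np.ones(3)          # input box [-1, 1]^3
lo, uo = interval_forward(original, l0, u0)
lc, uc = interval_forward(compressed, l0, u0)
# For a in [lo, uo], b in [lc, uc]: max |a - b| = max(uo - lc, uc - lo).
disc_upper = float(np.max(np.maximum(uo - lc, uc - lo)))
print(f"upper bound on max output discrepancy: {disc_upper:.3f}")
```

Because the two boxes are computed independently, this bound ignores that both networks see the *same* input, which is exactly the looseness the paper's merged-network construction avoids: analyzing one merged network keeps the shared input coupled through the whole computation.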
Pages: 18