Weight Quantization Method for Spiking Neural Networks and Analysis of Adversarial Robustness

Cited: 0
Authors
Li Y. [1 ]
Li Y. [1 ]
Cui X. [3 ]
Ni Q. [1 ,2 ]
Zhou Y. [1 ]
Affiliations
[1] Institute of Microelectronics, Chinese Academy of Sciences, Beijing
[2] School of Integrated Circuits, University of Chinese Academy of Sciences, Beijing
[3] School of Integrated Circuits, Peking University, Beijing
Keywords
Adversarial attack; Adversarial robustness; Sparsity; Spiking Neural Network (SNN); Weight quantization;
DOI
10.11999/JEIT230300
Abstract
Spiking Neural Networks (SNNs) on neuromorphic chips offer high sparsity and low power consumption, which make them well suited to visual classification tasks; however, they remain vulnerable to adversarial attacks. Existing studies lack robustness metrics for the quantization step of deploying a network onto hardware. This paper studies the weight quantization of SNNs during hardware mapping and analyzes the resulting adversarial robustness. A supervised training algorithm based on backpropagation with surrogate gradients is proposed, and adversarial attack samples are generated on the CIFAR-10 dataset using the Fast Gradient Sign Method (FGSM). A perception-aware quantization method and an evaluation framework that integrates adversarial training and inference are introduced. Experimental results show that direct encoding yields the worst adversarial robustness in the VGG9 network. Across four combinations of encoding and structural parameters, the differences in accuracy loss and inter-layer spike activity before and after weight quantization increase by 73.23% and 51.5%, respectively. The sparsity factors affect robustness in the following order: threshold increase > bit-width reduction in weight quantization > sparse coding. The proposed analysis framework and weight quantization method are validated on the PIcore neuromorphic chip. © 2023 Science Press. All rights reserved.
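The two operations central to the abstract, FGSM sample generation and low-bit weight quantization, can be sketched generically as follows. This is a minimal illustration, not the authors' implementation: the epsilon step size, the uniform symmetric quantization scheme, and the function names are assumptions for the sketch.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: take one step of size eps in the sign of the loss gradient
    with respect to the input, then clip back to the pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def quantize_weights(w, bits):
    """Uniform symmetric weight quantization to `bits` bits: map weights
    onto a signed integer grid, round, and rescale to float."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.max(np.abs(w)) / qmax    # one step of the integer grid
    return np.round(w / scale) * scale
```

Comparing accuracy and spike activity before and after applying `quantize_weights`, on both clean and `fgsm_perturb`-generated inputs, mirrors the kind of robustness comparison the paper reports.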
Pages: 3218-3227
Page count: 9