Quantization-Aware Training of Spiking Neural Networks for Energy-Efficient Spectrum Sensing on Loihi Chip

Cited by: 1
Authors
Liu, Shiya [1 ]
Mohammadi, Nima [1 ]
Yi, Yang [1 ]
Affiliation
[1] Virginia Tech, Bradley Dept Elect & Comp Engn, Blacksburg, VA 24061 USA
Funding
U.S. National Science Foundation (NSF)
Keywords
Spectrum sensing; spiking neural networks; quantization; quantization-aware training; optimization
DOI
10.1109/TGCN.2023.3337748
Chinese Library Classification
TN [Electronic technology; communication technology]
Discipline Code
0809
Abstract
Spectrum sensing is a technique used to identify idle and busy frequency bands in cognitive radio. Energy-efficient spectrum sensing is critical for multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems. In this paper, we propose using spiking neural networks (SNNs), which are more biologically plausible and energy-efficient than deep neural networks (DNNs), for spectrum sensing. The SNN models are implemented on Intel's Loihi chip, which is better suited to SNNs than GPUs are. Quantization is an effective technique for reducing the memory and energy consumption of SNNs, but previous SNN quantization methods suffer accuracy degradation relative to full-precision models. This degradation can be attributed to errors introduced by coarse gradient estimation through non-differentiable quantization layers. To address this issue, we introduce a quantization-aware training algorithm for SNNs running on Loihi. To mitigate errors caused by poor gradient estimation, we do not use a fixed quantizer configuration, as is common in existing SNN quantization methods; instead, we make the scale parameters of the quantizer trainable. Furthermore, our method adopts a probability-based scheme that selectively quantizes individual layers within the network, rather than quantizing all layers simultaneously. Our experimental results demonstrate that high-performance, energy-efficient spectrum sensing can be achieved using Loihi.
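The abstract's two central ideas, a quantizer whose scale parameter is trained by gradient descent rather than fixed, and a probability-based scheme that quantizes each layer independently rather than all at once, can be illustrated with a minimal NumPy sketch. All function names are hypothetical, and a straight-through estimator (round treated as identity in the backward pass) is assumed; this is an illustration of the general technique, not the paper's exact Loihi-targeted implementation.

```python
import numpy as np

def fake_quantize(x, scale, n_bits=8):
    """Uniform fake quantization: scale, round, clip, then dequantize.
    `scale` is a trainable parameter here, unlike fixed-scale schemes."""
    qmax = 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

def ste_scale_grad(x, scale, upstream_grad, n_bits=8):
    """Gradient of the fake-quant output w.r.t. the scale parameter,
    using the straight-through estimator for the round() step."""
    qmax = 2 ** (n_bits - 1) - 1
    q = np.round(x / scale)
    clipped = np.clip(q, -qmax - 1, qmax)
    inside = (q >= -qmax - 1) & (q <= qmax)
    # In range: d(q*scale)/d(scale) = q - x/scale (STE); out of range: clip value.
    dscale = np.where(inside, clipped - x / scale, clipped)
    return float(np.sum(upstream_grad * dscale))

def select_layers_to_quantize(num_layers, p, rng):
    """Probability-based scheme: each layer is quantized independently
    with probability p in a given training step."""
    return rng.random(num_layers) < p
```

During training, a layer whose mask entry is `False` would run at full precision for that step, so quantization error is introduced gradually instead of hitting every layer at once.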
Pages: 827-838 (12 pages)