Sparsity Enables Data and Energy Efficient Spiking Convolutional Neural Networks

Cited by: 5
Authors
Bhatt, Varun [1 ]
Ganguly, Udayan [1 ]
Institutions
[1] Indian Inst Technol, Dept Elect Engn, Mumbai, Maharashtra, India
Keywords
Sparse coding; Unsupervised learning; Feature extraction; Spiking neural networks; Training data efficiency; Neurons
DOI
10.1007/978-3-030-01418-6_26
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In recent years, deep learning has surpassed human performance in image recognition tasks. A major issue with deep learning systems is their reliance on large datasets for optimal performance, so when presented with a new task, the ability to generalize from small amounts of data becomes highly attractive. Research suggests that the human visual cortex may employ sparse coding to extract features from the images we see, leading to efficient use of the available data. To achieve good generalization and energy efficiency, we build a multi-layer spiking convolutional neural network that performs layer-wise sparse coding for unsupervised feature extraction. Applied to the MNIST dataset, it achieves 92.3% accuracy with just 500 data samples, about 4x fewer than vanilla CNNs need for comparable accuracy, and reaches 98.1% accuracy with the full dataset. Only around 7000 spikes are used per image (a 6x reduction in transferred bits per forward pass compared to CNNs), implying high sparsity. Thus, we show that our algorithm ensures greater sparsity, leading to improved data and energy efficiency in learning, which is essential for some real-world applications.
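To make the abstract's mechanism concrete, below is a minimal NumPy sketch of one rate-coded spiking convolutional layer with leaky integrate-and-fire (LIF) neurons, reporting a per-image spike count (the sparsity/energy proxy quoted above). This is an illustration under stated assumptions, not the authors' implementation: the function name lif_conv_layer, the Poisson rate coding, the winner-take-all step standing in for layer-wise sparse coding, and all thresholds and time constants are hypothetical choices.

```python
import numpy as np

def lif_conv_layer(image, kernels, n_steps=20, threshold=1.0, leak=0.9):
    """One rate-coded spiking conv layer with LIF neurons (toy sketch).

    image:   (H, W) array in [0, 1], interpreted as Poisson spike rates.
    kernels: (K, k, k) bank of convolution kernels.
    Returns the per-step binary spike maps and the total spike count,
    a rough proxy for the "spikes per image" metric in the abstract.
    """
    K, k, _ = kernels.shape
    H, W = image.shape
    oh, ow = H - k + 1, W - k + 1
    v = np.zeros((K, oh, ow))                    # membrane potentials
    rng = np.random.default_rng(0)
    spike_maps, total_spikes = [], 0
    for _ in range(n_steps):
        # Poisson rate coding: pixel intensity -> spike probability.
        in_spikes = (rng.random((H, W)) < image).astype(float)
        # Leaky integration of the (valid) convolution of input spikes.
        for f in range(K):
            for i in range(oh):
                for j in range(ow):
                    v[f, i, j] = leak * v[f, i, j] + np.sum(
                        in_spikes[i:i + k, j:j + k] * kernels[f])
        # Winner-take-all across feature maps: at each location only the
        # feature with the highest potential may fire -- a crude stand-in
        # for the sparse-coding competition described in the abstract.
        winners = v.argmax(axis=0)               # (oh, ow)
        mask = np.zeros_like(v, dtype=bool)
        mask[winners, np.arange(oh)[:, None], np.arange(ow)] = True
        spikes = (v >= threshold) & mask
        v[spikes] = 0.0                          # reset fired neurons
        total_spikes += int(spikes.sum())
        spike_maps.append(spikes)
    return spike_maps, total_spikes

# Toy usage: a random 28x28 "image" and 4 random 5x5 kernels.
img = np.clip(np.random.default_rng(1).random((28, 28)) * 0.2, 0.0, 1.0)
kern = np.random.default_rng(2).standard_normal((4, 5, 5)) * 0.1
_, n_spikes = lif_conv_layer(img, kern)
print("spikes per image:", n_spikes)             # sparsity proxy
```

In the paper's setting the spike count would be tallied across all layers of the network; a single layer suffices here to show how competition-induced sparsity translates into fewer transmitted events per forward pass.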
Pages: 263-272
Page count: 10