Parallel hyperparameter optimization of spiking neural networks

Cited: 1
Authors
Firmin, Thomas [1 ]
Boulet, Pierre [1 ]
Talbi, El-Ghazali [1 ]
Affiliations
[1] Univ Lille, CNRS, UMR 9189, Cent Lille,Inria,CRIStAL, F-59000 Lille, France
Keywords
Spiking neural networks; Hyperparameter optimization; Parallel asynchronous optimization; Bayesian optimization; STDP; SLAYER; ON-CHIP; CLASSIFICATION; DEEPER; MODEL
DOI
10.1016/j.neucom.2024.128483
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Hyperparameter optimization of spiking neural networks (SNNs) is a difficult task which has not yet been deeply investigated in the literature. In this work, we designed a scalable constrained Bayesian-based optimization algorithm that prevents sampling in non-spiking areas of an efficient high-dimensional search space. These search spaces contain infeasible solutions that output no or only a few spikes during the training or testing phases; we call such a network a "silent network". Finding them is difficult, as many hyperparameters are highly correlated to the architecture and to the dataset. We leverage silent networks by designing a spike-based early stopping criterion to accelerate the optimization process of SNNs trained by spike timing dependent plasticity and surrogate gradient. We parallelized the optimization algorithm asynchronously and ran large-scale experiments on a heterogeneous multi-GPU Petascale architecture. Results show that, by considering silent networks, we can design more flexible high-dimensional search spaces while maintaining good efficacy. The optimization algorithm was able to focus on networks with high performance by preventing the costly and worthless computation of silent networks.
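The spike-based early stopping idea described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' implementation: the function names (`is_silent`, `train_with_spike_early_stopping`), the per-sample spike threshold, and the patience window are all illustrative assumptions. The sketch monitors the mean output spike count per training step and aborts a trial once the network has stayed below the threshold for several consecutive steps, saving the cost of fully training a silent network.

```python
def is_silent(spike_counts, min_spikes_per_sample=1.0, patience=3):
    """Flag a 'silent network': mean output spike count has stayed below
    the threshold for the last `patience` training steps (illustrative rule)."""
    recent = spike_counts[-patience:]
    return len(recent) == patience and all(c < min_spikes_per_sample for c in recent)

def train_with_spike_early_stopping(train_step, n_epochs,
                                    min_spikes_per_sample=1.0, patience=3):
    """Run `train_step(epoch) -> (loss, mean_spikes)` for up to `n_epochs`,
    stopping early if the network goes silent.
    Returns (epochs_run, stopped_as_silent)."""
    spike_history = []
    for epoch in range(n_epochs):
        _loss, mean_spikes = train_step(epoch)
        spike_history.append(mean_spikes)
        if is_silent(spike_history, min_spikes_per_sample, patience):
            # Abort the trial: this hyperparameter configuration is infeasible.
            return epoch, True
    return n_epochs, False
```

In a constrained Bayesian optimization loop, a trial aborted this way would be reported to the surrogate model as an infeasible point, steering future sampling away from non-spiking regions of the search space.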
Pages: 23