Hardware-aware Few-shot Learning on a Memristor-based Small-world Architecture

Cited: 0
Authors
Raghunathan, Karthik Charan [1,2]
Demirag, Yigit [1,2]
Neftci, Emre [3,4]
Payvand, Melika [1,2]
Affiliations
[1] University of Zurich, Institute of Neuroinformatics, Zurich, Switzerland
[2] ETH Zurich, Zurich, Switzerland
[3] Forschungszentrum Jülich, Peter Grünberg Institute, Aachen, Germany
[4] RWTH Aachen University, Aachen, Germany
Source
2024 Neuro Inspired Computational Elements Conference (NICE), 2024
Keywords
meta-learning; few-shot learning; small-world architecture; neuromorphic computing; spiking neural networks; memristor; MAML
DOI
10.1109/NICE61972.2024.10548824
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Learning from few examples (few-shot learning) is one of the hallmarks of mammalian intelligence. In this work, we demonstrate, in simulation, on-chip few-shot learning on a recently proposed Spiking Neural Network (SNN) hardware architecture, the Mosaic. By virtue of its physical layout, the Mosaic exhibits a small-world connectivity similar to that of the mammalian cortex. By combining in-memory computing and routing with local connectivity, the Mosaic offers a highly efficient solution to routing information, which is the main source of energy consumption in neural network accelerators and in neuromorphic hardware in particular. We propose to meta-learn a small-world SNN resembling the Mosaic architecture for keyword spotting tasks, using the Model-Agnostic Meta-Learning (MAML) algorithm for adaptation at the edge, and we report the final accuracy on the Spiking Heidelberg Digits dataset. Using simulations of the hardware environment, we demonstrate 49.09 +/- 8.17% accuracy on five unseen classes with 5-shot data and a single gradient update; increasing to 10 gradient steps raises the accuracy to 67.97 +/- 1.99% in the same configuration. Our results show the applicability of MAML to analog substrates at the edge and highlight several factors that affect the learning performance of such meta-learning models on neuromorphic substrates.
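As context for the adaptation scheme the abstract describes (5-way, 5-shot, with 1 or 10 inner gradient steps), the sketch below shows the generic MAML inner/outer loop in JAX. It is a minimal, hypothetical illustration rather than the paper's implementation: the actual model is a spiking network mapped onto the Mosaic's small-world layout, while the dense network, dimensions, and helper names here (init_params, inner_adapt, maml_loss) are assumptions made for the example.

```python
# Minimal MAML sketch (JAX). Dense stand-in network; the paper's model is an SNN.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(64, 128, 5)):
    # Random dense layers; 5 outputs for a 5-way classification task.
    keys = jax.random.split(key, len(sizes) - 1)
    return [(0.1 * jax.random.normal(k, (m, n)), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def forward(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b  # class logits

def loss(params, x, y):
    # Cross-entropy on integer labels.
    logp = jax.nn.log_softmax(forward(params, x))
    return -jnp.mean(logp[jnp.arange(y.shape[0]), y])

def inner_adapt(params, x_s, y_s, inner_lr=0.1, steps=1):
    # Task-specific adaptation: 'steps' gradient updates on the support set
    # (1 or 10 in the experiments reported above).
    for _ in range(steps):
        grads = jax.grad(loss)(params, x_s, y_s)
        params = jax.tree_util.tree_map(lambda p, g: p - inner_lr * g,
                                        params, grads)
    return params

def maml_loss(params, task):
    # Outer (meta) objective: query-set loss after inner-loop adaptation.
    # Differentiating this through inner_adapt yields the MAML meta-gradient.
    x_s, y_s, x_q, y_q = task
    return loss(inner_adapt(params, x_s, y_s), x_q, y_q)

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    params = init_params(key)
    # Synthetic 5-way, 5-shot task: 25 support / 25 query examples, 64-dim inputs.
    x_s = jax.random.normal(key, (25, 64)); y_s = jnp.arange(25) % 5
    x_q = jax.random.normal(key, (25, 64)); y_q = jnp.arange(25) % 5
    meta_grads = jax.grad(maml_loss)(params, (x_s, y_s, x_q, y_q))
```

A meta-training step would average maml_loss over a batch of tasks and apply meta_grads with an outer-loop optimizer; at deployment, only inner_adapt runs on the device, which is what makes the scheme attractive for on-chip adaptation.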
Pages: 8
Related Papers
(50 in total)
  • [21] Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty
    Oh, Jaehoon
    Kim, Sungnyun
    Ho, Namgyu
    Kim, Jin-Hwa
    Song, Hwanjun
    Yun, Se-Young
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [22] Task-aware prototype refinement for improved few-shot learning
    Zhang, Wei
    Gu, Xiaodong
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (24): 17899 - 17913
  • [23] Mutually-aware feature learning for few-shot object counting
    Jeon, Yerim
    Lee, Subeen
    Kim, Jihwan
    Heo, Jae-Pil
    PATTERN RECOGNITION, 2025, 161
  • [24] A Novel Group-Aware Pruning Method for Few-shot Learning
    Zheng, Yin-Dong
    Ma, Yun-Tao
    Liu, Ruo-Ze
    Lu, Tong
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [25] Hierarchy-Aware Interactive Prompt Learning for Few-Shot Classification
    Yin, Xiaotian
    Wu, Jiamin
    Yang, Wenfei
    Zhou, Xu
    Zhang, Shifeng
    Zhang, Tianzhu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (12) : 12221 - 12232
  • [27] Category-Aware Siamese Learning Network for Few-Shot Segmentation
    Sun, Hui
    Zhang, Ziyan
    Huang, Lili
    Jiang, Bo
    Luo, Bin
    COGNITIVE COMPUTATION, 2024, 16 (03) : 924 - 935
  • [28] Task-aware Part Mining Network for Few-Shot Learning
    Wu, Jiamin
    Zhang, Tianzhu
    Zhang, Yongdong
    Wu, Feng
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 8413 - 8422
  • [29] Improving Few-shot Learning by Spatially-aware Matching and CrossTransformer
    Zhang, Hongguang
    Torr, Philip H. S.
    Koniusz, Piotr
    COMPUTER VISION - ACCV 2022, PT V, 2023, 13845 : 3 - 20
  • [30] PARN: Position-Aware Relation Networks for Few-Shot Learning
    Wu, Ziyang
    Li, Yuwei
    Guo, Lihua
    Jia, Kui
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6658 - 6666