A Training-Efficient Hybrid-Structured Deep Neural Network With Reconfigurable Memristive Synapses

Cited by: 28
Authors
Bai, Kangjun [1 ]
An, Qiyuan [1 ]
Liu, Lingjia [1 ]
Yi, Yang [1 ]
Affiliations
[1] Virginia Tech, Dept Elect & Comp Engn, Blacksburg, VA 24061 USA
Keywords
Chaotic time-series forecasting; deep neural network (DNN); delay feedback system; hybrid neural network; image classification; memristor; reservoir computing; speech recognition; CHIP; PROCESSOR; FEEDBACK;
DOI
10.1109/TVLSI.2019.2942267
Chinese Library Classification (CLC): TP3 [Computing Technology, Computer Technology]
Discipline Classification Code: 0812
Abstract
The continued success of neuromorphic computing has pushed today's artificial intelligence substantially forward. Deep neural networks (DNNs), a brain-inspired machine learning architecture, rely on intensive vector-matrix computation and deliver extraordinary performance in data-intensive applications. Recently, the nonvolatile memory (NVM) crossbar array has unveiled an intrinsic vector-matrix computation with parallel computing capability in neural network designs. In this article, we design and fabricate a hybrid-structured DNN (hybrid-DNN), combining both depth-in-space (spatial) and depth-in-time (temporal) deep learning characteristics. Our hybrid-DNN employs memristive synapses working in a hierarchical information processing fashion and delay-based spiking neural network (SNN) modules as the readout layer. Our fabricated prototype in 130-nm CMOS technology, along with experimental results, demonstrates high computing parallelism and energy efficiency at low hardware implementation cost, making the designed system a candidate for low-power embedded applications. On chaotic time-series forecasting benchmarks, our hybrid-DNN achieves a 1.16x to 13.77x reduction in prediction error compared to state-of-the-art DNN designs. Moreover, our hybrid-DNN records 99.03% and 99.63% testing accuracy on the handwritten digit classification and spoken digit recognition tasks, respectively.
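The intrinsic vector-matrix computation of an NVM crossbar, which the abstract highlights, follows from Ohm's law per cell and Kirchhoff's current law per column: each column current is the dot product of the input voltage vector with that column's conductances. The following is a minimal numerical sketch of that principle, not the authors' implementation; the array size, conductance range, and voltages are illustrative assumptions.

```python
import numpy as np

# A memristive crossbar computes a vector-matrix product in one analog step:
# I = G * V per cell (Ohm's law), summed along each column (Kirchhoff's law).
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # cell conductances in siemens (4 rows x 3 columns)
V = np.array([0.1, 0.2, 0.0, 0.3])        # input voltages in volts, one per row

I = V @ G  # column currents in amperes: the "free" vector-matrix multiply
print(I.shape)  # one output current per column
```

Every column current is produced in parallel in a single read cycle, which is the source of the computing parallelism and energy efficiency the paper claims for its memristive synapses.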
Pages: 62-75
Page count: 14
Related Papers
50 records
  • [1] A memristive deep belief neural network based on silicon synapses
    Wang, Wei
    Danial, Loai
    Li, Yang
    Herbelin, Eric
    Pikhay, Evgeny
    Roizin, Yakov
    Hoffer, Barak
    Wang, Zhongrui
    Kvatinsky, Shahar
    NATURE ELECTRONICS, 2022, 5 (12) : 870 - 880
  • [3] Efficient training for the hybrid optical diffractive deep neural network
    Fang, Tao
    Lia, Jingwei
    Wu, Tongyu
    Cheng, Ming
    Dong, Xiaowen
    AI AND OPTICAL DATA SCIENCES III, 2022, 12019
  • [4] Memristive Neural Network with Efficient In-Situ Supervised Training
    Prajaprati, Santlal
    Mondal, Manobendra Nath
    Sur-Kolay, Susmita
    2022 IEEE 35TH INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (IEEE SOCC 2022), 2022, : 71 - 76
  • [5] Hybrid Neural Network for Efficient Training
    Hossain, Md. Billal
    Islam, Sayeed
    Zhumur, Noor-e-Hafsa
    Khanam, Najmoon Nahar
    Khan, Md. Imran
    Kabir, Md. Ahasan
    2017 INTERNATIONAL CONFERENCE ON ELECTRICAL, COMPUTER AND COMMUNICATION ENGINEERING (ECCE), 2017, : 528 - 532
  • [6] A High Energy Efficient Reconfigurable Hybrid Neural Network Processor for Deep Learning Applications
    Yin, Shouyi
    Ouyang, Peng
    Tang, Shibin
    Tu, Fengbin
    Li, Xiudong
    Zheng, Shixuan
    Lu, Tianyi
    Gu, Jiangyuan
    Liu, Leibo
    Wei, Shaojun
    IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2018, 53 (04) : 968 - 982
  • [7] A Memory-Efficient Hybrid Parallel Framework for Deep Neural Network Training
    Li, Dongsheng
    Li, Shengwei
    Lai, Zhiquan
    Fu, Yongquan
    Ye, Xiangyu
    Cai, Lei
    Qiao, Linbo
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2024, 35 (04) : 577 - 591
  • [8] Memory Efficient Deep Neural Network Training
    Shilova, Alena
    EURO-PAR 2021: PARALLEL PROCESSING WORKSHOPS, 2022, 13098 : 515 - 519
  • [9] Neural Network Training Acceleration With RRAM-Based Hybrid Synapses
    Choi, Wooseok
    Kwak, Myonghoon
    Kim, Seyoung
    Hwang, Hyunsang
    FRONTIERS IN NEUROSCIENCE, 2021, 15
  • [10] RESPARC: A Reconfigurable and Energy-Efficient Architecture with Memristive Crossbars for Deep Spiking Neural Networks
    Ankit, Aayush
    Sengupta, Abhronil
    Panda, Priyadarshini
    Roy, Kaushik
    PROCEEDINGS OF THE 2017 54TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2017,