An efficient deep neural network accelerator using controlled ferroelectric domain dynamics

Cited by: 7
Authors
Majumdar, Sayani [1 ]
Affiliations
[1] VTT Tech Res Ctr Finland Ltd, POB 1000, FI-02044 Espoo, Finland
Funding
Academy of Finland
Keywords
ferroelectric tunnel junction; nonvolatile memory; ferroelectric domain dynamics; deep neural network accelerator; neuromorphic computing; in-memory computing; CROSSBAR ARRAYS; HAFNIUM-OXIDE; MEMORY; DESIGN; DEVICE;
DOI
10.1088/2634-4386/ac974d
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronic technology, communication technology]
Discipline classification codes
0808; 0809
Abstract
This work reports an efficient deep neural network (DNN) accelerator in which analog synaptic weight elements are controlled by ferroelectric (FE) domain dynamics, using an integrated device-to-algorithm framework for benchmarking novel synaptic devices. In poly(vinylidene fluoride-trifluoroethylene)-based ferroelectric tunnel junctions (FTJs), analog conductance states are measured with a custom pulsing protocol, and the associated control circuits and array architectures for DNN training are simulated. The results show that precise control of polarization switching dynamics in multi-domain polycrystalline FE thin films can produce considerable weight-update linearity in metal-ferroelectric-semiconductor (MFS) tunnel junctions, while ultrafast switching and low junction currents enable extremely energy-efficient operation. Via an integrated platform of hardware development, characterization, and modeling, the available conductance range is predicted in which linear weight updates can be expected under identical potentiating and depressing pulses for efficient DNN training and inference. As an example, an analog crossbar-based DNN accelerator with MFS junctions as synaptic weight elements achieved >93% training accuracy on the full MNIST handwritten-digit dataset and >95% on cropped images. One observed challenge is the rather limited dynamic conductance range when operating under identical potentiating and depressing pulses below 1 V; work is underway to widen the FTJ dynamic conductance range while maintaining weight-update linearity under an identical-pulse scheme.
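The abstract's two key ideas — an analog crossbar performing matrix-vector multiplication through device conductances, and weight updates driven by trains of identical potentiating/depressing pulses whose effect saturates near the conductance bounds — can be sketched with a widely used phenomenological device model. This is an illustrative sketch only: the conductance range, nonlinearity factor, and pulse count below are assumed placeholder values, not the measured FTJ parameters from the paper.

```python
import numpy as np

# Assumed, illustrative device parameters (not the paper's measured values)
G_MIN, G_MAX = 1e-9, 1e-8   # conductance window (S)
NL = 1.5                    # nonlinearity factor: higher = more saturation
N_PULSES = 64               # identical pulses spanning the full window

def potentiate(g, n=1):
    """Apply n identical potentiating pulses.

    The per-pulse conductance increase is proportional to the remaining
    headroom (G_MAX - g), so updates shrink as the device saturates --
    the nonlinearity that limits weight-update linearity.
    """
    for _ in range(n):
        g = g + (G_MAX - g) * (1 - np.exp(-NL / N_PULSES))
    return g

def depress(g, n=1):
    """Apply n identical depressing pulses; symmetric saturation toward G_MIN."""
    for _ in range(n):
        g = g - (g - G_MIN) * (1 - np.exp(-NL / N_PULSES))
    return g

def crossbar_mvm(G, v):
    """Analog in-memory MAC: column currents I = G^T @ v.

    Ohm's law gives per-cell currents G[i, j] * v[i]; Kirchhoff's current
    law sums them along each column, yielding one multiply-accumulate
    per output in a single read step.
    """
    return G.T @ v

rng = np.random.default_rng(0)
G = rng.uniform(G_MIN, G_MAX, size=(4, 3))  # 4 input rows x 3 output columns
v = rng.uniform(0.0, 0.5, size=4)           # sub-1-V read voltages
I = crossbar_mvm(G, v)                      # 3 output currents
```

In benchmarking frameworks of the kind the paper uses, the training accuracy of the simulated accelerator degrades as the saturation (NL) grows or the conductance window narrows, which is why the abstract emphasizes both weight-update linearity and dynamic conductance range.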
Pages: 14
Related papers (50 total)
  • [41] Gao, Chang; Neil, Daniel; Ceolini, Enea; Liu, Shih-Chii; Delbruck, Tobi. DeltaRNN: A Power-efficient Recurrent Neural Network Accelerator. Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA'18), 2018: 21-30
  • [42] Bamasaq, O.; Alghazzawi, D.; Alshehri, S.; Jamjoom, A.; Asghar, M. Z. Efficient Classification of Hyperspectral Data Using Deep Neural Network Model. Human-centric Computing and Information Sciences, 2022, 12
  • [43] Chang, Kuo-Wei; Chang, Tian-Sheuan. VWA: Hardware Efficient Vectorwise Accelerator for Convolutional Neural Network. IEEE Transactions on Circuits and Systems I: Regular Papers, 2020, 67(1): 145-154
  • [44] Chang, Jung-Woo; Kang, Keon-Woo; Kang, Suk-Ju. SDCNN: An Efficient Sparse Deconvolutional Neural Network Accelerator on FPGA. 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2019: 968-971
  • [45] Alaeddine, Hmidi; Jihene, Malek; Khemaja, Maha. An Efficient Deep Network in Network Architecture for Image Classification on FPGA Accelerator. 2021 International Conference on Cyberworlds (CW 2021), 2021: 72-77
  • [46] Sakamoto, Ryuichi; Takata, Ryo; Ishii, Jun; Kondo, Masaaki; Nakamura, Hiroshi; Ohkubo, Tetsui; Kojima, Takuya; Amano, Hideharu. Scalable Deep Neural Network Accelerator Cores with Cubic Integration using Through Chip Interface. Proceedings International SoC Design Conference 2017 (ISOCC 2017), 2017: 155-156
  • [47] Siyad, Ismayil C.; Tamilselvan, S.; Sneha, V. V. Frequency Domain Learning Scheme for Massive MIMO Using Deep Neural Network. Proceedings of the International Conference on Intelligent Computing and Control Systems (ICICCS 2020), 2020: 1293-1300
  • [48] Yoo, Tae Koan; Park, Jong Kang; Kim, Jong Tae. VLSI Implementation of Area-Efficient Parallelized Neural Network Accelerator Using Hashing Trick. 2019 International SoC Design Conference (ISOCC), 2019: 67-68
  • [49] Hu, Xianghong; Chen, Taosheng; Huang, Hongmin; Liu, Zihao; Li, Xueming; Xiong, Xiaoming. Efficient field-programmable gate array-based reconfigurable accelerator for deep convolution neural network. Electronics Letters, 2021, 57(6): 238-240
  • [50] Bjerge, Kim; Schougaard, Jonathan Horsted; Larsen, Daniel Ejnar. A scalable and efficient convolutional neural network accelerator using HLS for a system-on-chip design. Microprocessors and Microsystems, 2021, 87