An efficient deep neural network accelerator using controlled ferroelectric domain dynamics

Cited: 7
Authors
Majumdar, Sayani [1 ]
Affiliation
[1] VTT Tech Res Ctr Finland Ltd, POB 1000, FI-02044 Espoo, Finland
Source
Funding
Academy of Finland
Keywords
ferroelectric tunnel junction; nonvolatile memory; ferroelectric domain dynamics; deep neural network accelerator; neuromorphic computing; in-memory computing; CROSSBAR ARRAYS; HAFNIUM-OXIDE; MEMORY; DESIGN; DEVICE;
DOI
10.1088/2634-4386/ac974d
CLC classification
TM [Electrical engineering]; TN [Electronic and communication technology];
Subject classification codes
0808; 0809;
Abstract
The current work reports an efficient deep neural network (DNN) accelerator in which analog synaptic weight elements are controlled by ferroelectric (FE) domain dynamics. An integrated device-to-algorithm framework for benchmarking novel synaptic devices is used. In poly(vinylidene fluoride-trifluoroethylene)-based ferroelectric tunnel junctions (FTJs), analog conductance states are measured using a custom pulsing protocol, and the associated control circuits and array architectures for DNN training are simulated. Our results show that precise control of polarization-switching dynamics in multi-domain polycrystalline FE thin films can yield considerable weight-update linearity in metal-ferroelectric-semiconductor (MFS) tunnel junctions. Ultrafast switching and low junction currents in these devices enable extremely energy-efficient operation. Via an integrated platform of hardware development, characterization and modeling, we predict the conductance range over which linear weight updates can be expected under identical potentiating and depressing pulses for efficient DNN training and inference tasks. As an example, an analog crossbar-based DNN accelerator with MFS junctions as synaptic weight elements achieved >93% training accuracy on the large MNIST handwritten-digit dataset, while >95% accuracy was achieved for cropped images. One observed challenge is the rather limited dynamic conductance range when operating under identical potentiating and depressing pulses below 1 V. Work is underway to improve the FTJ dynamic conductance range while maintaining weight-update linearity under an identical pulse scheme.
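The abstract's central trade-off, that conductance updates under identical potentiating pulses saturate and so limit both linearity and dynamic range, is commonly captured with a saturating-exponential update model in device-to-algorithm benchmarking frameworks. A minimal sketch of that behavioural model is below; the model form, the nonlinearity parameter `nu`, and the conductance range are illustrative assumptions, not values extracted from this paper:

```python
import numpy as np

# Behavioural model of an analog synaptic conductance update under
# identical potentiating pulses (saturating-exponential form).
# nu = 0 recovers the ideal linear device; larger nu means the step
# size shrinks faster as the conductance approaches G_max.
def potentiate(G, G_min, G_max, n_levels, nu=1.0):
    """Conductance after one identical potentiating pulse."""
    if nu == 0:
        return min(G + (G_max - G_min) / n_levels, G_max)
    norm = (G - G_min) / (G_max - G_min)  # position within the range, 0..1
    step = ((G_max - G_min) / n_levels
            * nu / (1.0 - np.exp(-nu))    # normalization so nu->0 is linear
            * np.exp(-nu * norm))         # step decays near G_max
    return min(G + step, G_max)

# Trace 64 identical pulses for a linear and a nonlinear device.
# The conductance window here is an illustrative FTJ-like range.
G_min, G_max, n = 1e-9, 100e-9, 64  # siemens
for nu in (0.0, 3.0):
    G, trace = G_min, []
    for _ in range(n):
        G = potentiate(G, G_min, G_max, n, nu)
        trace.append(G)
    # the linear device sweeps its full range evenly; the nonlinear
    # one takes large early steps and saturates well before pulse 64
    print(f"nu={nu}: G after 32 pulses = {trace[31]:.2e} S, "
          f"after 64 pulses = {trace[63]:.2e} S")
```

In crossbar simulators this asymmetry between large early steps and vanishing late steps is what degrades training accuracy, which is why the paper emphasizes finding the conductance sub-range where updates stay near-linear under identical pulses.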
Pages: 14
Related papers
50 records
  • [1] RNSiM: Efficient Deep Neural Network Accelerator Using Residue Number Systems
    Roohi, Arman
    Taheri, MohammadReza
    Angizi, Shaahin
    Fan, Deliang
    2021 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN (ICCAD), 2021,
  • [2] An Energy-Efficient Deep Neural Network Accelerator Design
    Jung, Jueun
    Lee, Kyuho Jason
    2020 54TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2020, : 272 - 276
  • [3] Efficient Hardware Accelerator for Compressed Sparse Deep Neural Network
    Xiao, Hao
    Zhao, Kaikai
    Liu, Guangzhu
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2021, E104D (05) : 772 - 775
  • [4] Energy-Efficient Ferroelectric Domain Wall Memory with Controlled Domain Switching Dynamics
    Wang, Chao
    Jiang, Jun
    Chai, Xiaojie
    Lian, Jianwei
    Hu, Xiaobing
    Jiang, An Quan
    ACS APPLIED MATERIALS & INTERFACES, 2020, 12 (40) : 44998 - 45004
  • [5] An Efficient Accelerator for Deep Convolutional Neural Networks
    Kuo, Yi-Xian
    Lai, Yeong-Kang
    2020 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - TAIWAN (ICCE-TAIWAN), 2020,
  • [6] CASA: A Convolution Accelerator using Skip Algorithm for Deep Neural Network
    Kim, Young Ho
    An, Gi Jo
    Sunwoo, Myung Hoon
    2019 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2019,
  • [7] Deep Neural Network Accelerator based on FPGA
    Thang Viet Huynh
    2017 4TH NAFOSTED CONFERENCE ON INFORMATION AND COMPUTER SCIENCE (NICS), 2017, : 254 - 257
  • [8] Design of an Efficient Deep Neural Network Accelerator Based on Block Posit Number Representation
    Hsiao, Shen-Fu
    Lin, Sin-Chen
    Chen, Guan-Lin
    Yang, Shih-Hua
    Yuan, Yen-Che
    Chen, Kun-Chih
    2024 INTERNATIONAL VLSI SYMPOSIUM ON TECHNOLOGY, SYSTEMS AND APPLICATIONS, VLSI TSA, 2024,
  • [9] Ascend: A Scalable and Energy-Efficient Deep Neural Network Accelerator With Photonic Interconnects
    Li, Yuan
    Wang, Ke
    Zheng, Hao
    Louri, Ahmed
    Karanth, Avinash
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2022, 69 (07) : 2730 - 2741
  • [10] An efficient stochastic computing based deep neural network accelerator with optimized activation functions
    Bodiwala, S.
    Nanavati, N.
    International Journal of Information Technology, 2021, 13 (3) : 1179 - 1192