An efficient deep neural network accelerator using controlled ferroelectric domain dynamics

Cited by: 7
Authors
Majumdar, Sayani [1 ]
Affiliation
[1] VTT Tech Res Ctr Finland Ltd, POB 1000, FI-02044 Espoo, Finland
Funding
Academy of Finland
Keywords
ferroelectric tunnel junction; nonvolatile memory; ferroelectric domain dynamics; deep neural network accelerator; neuromorphic computing; in-memory computing; CROSSBAR ARRAYS; HAFNIUM-OXIDE; MEMORY; DESIGN; DEVICE;
DOI
10.1088/2634-4386/ac974d
Chinese Library Classification
TM (electrical engineering); TN (electronics and communication technology)
Discipline codes
0808; 0809
Abstract
The current work reports an efficient deep neural network (DNN) accelerator in which analog synaptic weight elements are controlled by ferroelectric (FE) domain dynamics. An integrated device-to-algorithm framework for benchmarking novel synaptic devices is used. In poly(vinylidene fluoride-trifluoroethylene)-based ferroelectric tunnel junctions (FTJs), analog conductance states are measured using a custom pulsing protocol, and the associated control circuits and array architectures for DNN training are simulated. Our results show that precise control of polarization switching dynamics in multi-domain polycrystalline FE thin films can produce considerable weight-update linearity in metal-ferroelectric-semiconductor (MFS) tunnel junctions. Ultrafast switching and low junction currents in these devices enable extremely energy-efficient operation. Via an integrated platform of hardware development, characterization, and modeling, we predict the conductance range in which linear weight updates can be expected under identical potentiating and depressing pulses for efficient DNN training and inference tasks. As an example, an analog crossbar-based DNN accelerator with MFS junctions as synaptic weight elements achieved >93% training accuracy on the full MNIST handwritten-digit dataset, and >95% accuracy on cropped images. One observed challenge is the rather limited dynamic conductance range when operating under identical potentiating and depressing pulses below 1 V. Investigation is underway to widen the FTJ dynamic conductance range while maintaining weight-update linearity under an identical pulse scheme.
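The abstract's central point (weight-update linearity under identical pulses, and the limited usable conductance window) can be illustrated with a minimal toy model. This is a sketch under assumed parameters: the conductance window `G_MIN`/`G_MAX`, the nonlinearity factor `BETA`, and the exponential-saturation update rule are illustrative conventions from the analog-synapse literature, not measured values or the paper's actual device model.

```python
import numpy as np

# Toy analog synapse: conductance updated by identical-amplitude pulses.
# All parameters below are illustrative assumptions, not FTJ measurements.
G_MIN, G_MAX = 1.0, 10.0   # assumed conductance window (arbitrary units)
BETA = 3.0                 # nonlinearity factor; 0 -> perfectly linear update
N_LEVELS = 64              # nominal number of programmable levels

def potentiate(g, beta=BETA):
    """One identical potentiating pulse; the step shrinks near G_MAX."""
    step = (G_MAX - G_MIN) / N_LEVELS
    return min(G_MAX, g + step * np.exp(-beta * (g - G_MIN) / (G_MAX - G_MIN)))

def depress(g, beta=BETA):
    """One identical depressing pulse; the step shrinks near G_MIN."""
    step = (G_MAX - G_MIN) / N_LEVELS
    return max(G_MIN, g - step * np.exp(-beta * (G_MAX - g) / (G_MAX - G_MIN)))

def crossbar_mvm(voltages, conductances):
    """Analog matrix-vector multiply: Ohm's law per cell, currents summed
    along each column (Kirchhoff), emulated here as a dot product."""
    return voltages @ conductances

# Apply a full train of identical potentiating pulses and record the trace.
g = G_MIN
trace = [g]
for _ in range(N_LEVELS):
    g = potentiate(g)
    trace.append(g)
# With BETA > 0 the trace is concave and saturates well below G_MAX:
# only the lower part of the window is quasi-linear, echoing the paper's
# observation of a limited dynamic range under identical sub-1 V pulses.
```

Sweeping `BETA` toward 0 makes the trace linear across the full window, which is the regime the paper targets by controlling domain switching dynamics.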
Pages: 14
Related papers
50 records in total
  • [21] Activation in Network for NoC-based Deep Neural Network Accelerator
    Zhu, Wenyao
    Chen, Yizhi
    Lu, Zhonghai
    2024 INTERNATIONAL VLSI SYMPOSIUM ON TECHNOLOGY, SYSTEMS AND APPLICATIONS, VLSI TSA, 2024,
  • [22] Deep transfer neural network using hybrid representations of domain discrepancy
    Lu, Changsheng
    Gu, Chaochen
    Wu, Kaijie
    Xia, Siyu
    Wang, Haotian
    Guan, Xinping
    NEUROCOMPUTING, 2020, 409 : 60 - 73
  • [23] RECOM: An Efficient Resistive Accelerator for Compressed Deep Neural Networks
    Ji, Houxiang
    Song, Linghao
    Jiang, Li
    Li, Hai
    Chen, Yiran
    PROCEEDINGS OF THE 2018 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE), 2018, : 237 - 240
  • [24] CSDSE: An efficient design space exploration framework for deep neural network accelerator based on cooperative search
    Feng, Kaijie
    Fan, Xiaoya
    An, Jianfeng
    Wang, Haoyang
    Li, Chuxi
    NEUROCOMPUTING, 2025, 623
  • [25] A hardware-efficient computing engine for FPGA-based deep convolutional neural network accelerator
    Li, Xueming
    Huang, Hongmin
    Chen, Taosheng
    Gao, Huaien
    Hu, Xianghong
    Xiong, Xiaoming
    MICROELECTRONICS JOURNAL, 2022, 128
  • [26] An Energy-Efficient Deep Convolutional Neural Network Training Accelerator for In Situ Personalization on Smart Devices
    Choi, Seungkyu
    Sim, Jaehyeong
    Kang, Myeonggu
    Choi, Yeongjae
    Kim, Hyeonuk
    Kim, Lee-Sup
    IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2020, 55 (10) : 2691 - 2702
  • [27] MOSDA: On-chip memory optimized sparse deep neural network accelerator with efficient index matching
    Xu, Hongjie
    Shiomi, Jun
    Onodera, Hidetoshi
    IEEE Open Journal of Circuits and Systems, 2021, 2 : 144 - 155
  • [28] SONIC: A Sparse Neural Network Inference Accelerator with Silicon Photonics for Energy-Efficient Deep Learning
    Sunny, Febin
    Nikdast, Mahdi
    Pasricha, Sudeep
    27TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC 2022, 2022, : 214 - 219
  • [29] UNPU: An Energy-Efficient Deep Neural Network Accelerator With Fully Variable Weight Bit Precision
    Lee, Jinmook
    Kim, Changhyeon
    Kang, Sanghoon
    Shin, Dongjoo
    Kim, Sangyeob
    Yoo, Hoi-Jun
    IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2019, 54 (01) : 173 - 185
  • [30] The Design and Implementation of Scalable Deep Neural Network Accelerator Cores
    Sakamoto, Ryuichi
    Takata, Ryo
    Ishii, Jun
    Kondo, Masaaki
    Nakamura, Hiroshi
    Ohkubo, Tetsui
    Kojima, Takuya
    Amano, Hideharu
    2017 IEEE 11TH INTERNATIONAL SYMPOSIUM ON EMBEDDED MULTICORE/MANY-CORE SYSTEMS-ON-CHIP (MCSOC 2017), 2017, : 13 - 20