Design Strategies of Capacitor-Based Synaptic Cell for High-Efficiency Analog Neural Network Training

Cited: 0
|
Authors
Lee, Byoungwoo [1 ]
Ji, Wonjae [1 ]
Kim, Hyejin [1 ]
Han, Seungmin [1 ]
Park, Geonwoong [2 ]
Hur, Pyeongkang [1 ]
Jeon, Gilsu [3 ]
Lee, Hyung-Min [4 ]
Chung, Yoonyoung [3 ]
Son, Junwoo [1 ]
Noh, Yong-Young [2 ]
Kim, Seyoung [1 ]
Affiliations
[1] POSTECH, Dept Mat Sci & Engn, Pohang 37673, South Korea
[2] POSTECH, Dept Chem Engn, Pohang 37673, South Korea
[3] POSTECH, Dept Elect Engn, Pohang 37673, South Korea
[4] Korea Univ, Sch Elect Engn, Seoul 02841, South Korea
Funding
National Research Foundation of Singapore;
Keywords
analog computing; capacitor-based synaptic cell; crossbar array; in-memory computing; resistive processing unit; thin-film transistors; IN-MEMORY; DEVICES; ARRAY; ACCELERATION; TRANSISTOR; AI;
DOI
10.1002/aisy.202400600
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Analog in-memory computing, leveraging resistive switching cross-point devices known as resistive processing units (RPUs), offers substantial improvements in the performance and energy efficiency of deep neural network (DNN) training. Among the promising candidates for RPU devices, the capacitor-based synaptic circuit stands out due to its near-ideal switching characteristics. However, despite its potential, challenges such as large cell areas and retention issues remain to be addressed. In this work, we study the three-transistor, one-capacitor synaptic cell design, aiming to enhance computing performance and scalability. Through comprehensive device-level modeling and system-level simulation, we assess how transistor characteristics influence DNN training accuracy and identify critical design strategies. We propose a novel cell design methodology that optimizes computing performance while minimizing cell area, thereby enhancing scalability. Additionally, we provide development guidelines for cell components, identifying oxide-based semiconductors as a promising channel material for the transistors. This research contributes valuable insights for the development of future analog DNN training accelerators using capacitor-based synaptic cells, with a focus on addressing current limitations and maximizing efficiency.
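The RPU training scheme the abstract refers to performs DNN weight updates as analog outer products directly in the crossbar array, and cell non-idealities (quantized, bounded, asymmetric conductance changes) are what degrade training accuracy. A minimal numerical sketch of such an update follows; all device parameters (`dw_min`, `w_max`, `asym`) and the function itself are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

def rpu_update(W, x, d, lr=0.1, dw_min=0.01, w_max=1.0, asym=0.0):
    """Apply an RPU-style analog outer-product update ~ lr * d x^T to W.

    Illustrative device non-idealities: each weight changes in discrete
    increments of ~dw_min (pulse granularity), is clipped to
    [-w_max, w_max] (bounded conductance), and potentiation/depression
    steps differ by a factor controlled by `asym` (update asymmetry).
    """
    grad = lr * np.outer(d, x)                      # ideal update
    n_pulses = np.round(grad / dw_min)              # quantize to pulses
    step = dw_min * (1 + asym * np.sign(n_pulses))  # up/down asymmetry
    return np.clip(W + n_pulses * step, -w_max, w_max)

rng = np.random.default_rng(0)
W = np.zeros((3, 4))
x = rng.standard_normal(4)   # forward activation vector
d = rng.standard_normal(3)   # backpropagated error vector

W_sym = rpu_update(W, x, d, asym=0.0)   # near-ideal symmetric device
W_asym = rpu_update(W, x, d, asym=0.5)  # asymmetric device
```

With `asym=0` the result tracks the ideal gradient step up to quantization, whereas a nonzero asymmetry biases the weights; this is the kind of effect the paper's device-level modeling connects to DNN training accuracy.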
Pages: 16
Related Papers
50 records in total
  • [31] A Fully Analog Memristor-Based Neural Network with Online Gradient Training
    Rosenthal, Eyal
    Greshnikov, Sergey
    Soudry, Daniel
    Kvatinsky, Shahar
    2016 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2016, : 1394 - 1397
  • [32] Design of wideband high-efficiency power amplifier based on microstrip filter matching network with hybrid rings
    Xu, Sen
    Wu, Jianfeng
    Chen, Xiang
    MICROELECTRONICS JOURNAL, 2025, 158
  • [33] Analog Deep Neural Network Based on NOR Flash Computing Array for High Speed/Energy Efficiency Computation
    Xiang, Y. C.
    Huang, P.
    Zhou, Z.
    Han, R. Z.
    Jiang, Y. N.
    Shu, Q. M.
    Su, Z. Q.
    Liu, Y. B.
    Liu, X. Y.
    Kang, J. F.
    2019 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2019,
  • [34] High-efficiency and low-energy ship recognition strategy based on spiking neural network in SAR images
    Xie, Hongtu
    Jiang, Xinqiao
    Hu, Xiao
    Wu, Zhitao
    Wang, Guoqian
    Xie, Kai
    FRONTIERS IN NEUROROBOTICS, 2022, 16
  • [35] In-sensor neural network for high energy efficiency analog-to-information conversion
    Sadasivuni, Sudarsan
    Bhanushali, Sumukh Prashant
    Banerjee, Imon
    Sanyal, Arindam
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [36] High-Efficiency Rectifier with Wide Input Power Range Based on a Small Capacitor in Parallel with the Diode
    Wu, Pengde
    Chen, Xiaojie
    Lin, Hang
    Liu, Changjun
    2019 IEEE MTT-S INTERNATIONAL MICROWAVE SYMPOSIUM (IMS), 2019, : 1316 - 1319
  • [37] In-sensor neural network for high energy efficiency analog-to-information conversion
    Sadasivuni, Sudarsan
    Bhanushali, Sumukh Prashant
    Banerjee, Imon
    Sanyal, Arindam
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [38] Enhanced Bi-Prediction With Convolutional Neural Network for High-Efficiency Video Coding
    Zhao, Zhenghui
    Wang, Shiqi
    Wang, Shanshe
    Zhang, Xinfeng
    Ma, Siwei
    Yang, Jiansheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29 (11) : 3291 - 3301
  • [39] Efficient Training of Supervised Spiking Neural Network via Accurate Synaptic-Efficiency Adjustment Method
    Xie, Xiurui
    Qu, Hong
    Yi, Zhang
    Kurths, Jurgen
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2017, 28 (06) : 1411 - 1424
  • [40] Network Structure Optimization and High-Efficiency Implementation of Skynet Based on FPGA
    Tang W.-W.
    Zhong S.
    Lu J.-Y.
    Yan L.-X.
    Tan F.-Z.
    Zhou X.
    Xu W.-H.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2023, 51 (02): : 314 - 323