A Ternary Neural Network Computing-in-Memory Processor With 16T1C Bitcell Architecture

Cited by: 5
Authors
Jeong, Hoichang [1]
Kim, Seungbin [2]
Park, Keonhee [2]
Jung, Jueun [1]
Lee, Kyuho Jason [3]
Affiliations
[1] Ulsan Natl Inst Sci & Technol, Dept Elect Engn, Ulsan 44919, South Korea
[2] Ulsan Natl Inst Sci & Technol, Grad Sch Artificial Intelligence, Ulsan 44919, South Korea
[3] Ulsan Natl Inst Sci & Technol, Grad Sch Artificial Intelligence, Dept Elect Engn, Ulsan 44919, South Korea
Funding
National Research Foundation of Singapore
Keywords
Computer architecture; Throughput; Neural networks; Linearity; Energy efficiency; Transistors; SRAM; computing-in-memory (CIM); processing-in-memory (PIM); ternary neural network (TNN); analog computing; SRAM macro; computation; binary
DOI
10.1109/TCSII.2023.3265064
CLC classification
TM (Electrical Engineering); TN (Electronics and Communication Technology)
Subject classification
0808; 0809
Abstract
A highly energy-efficient Computing-in-Memory (CIM) processor for Ternary Neural Network (TNN) acceleration is proposed in this brief. Previous CIM processors for multi-bit-precision neural networks showed low energy efficiency and throughput, while CIM processors that accelerated lightweight binary neural networks achieved high energy efficiency at the cost of poor inference accuracy. In addition, most previous works suffered from poor linearity in their analog computing and from energy-consuming analog-to-digital conversion. To resolve these issues, we propose a Ternary-CIM (T-CIM) processor with a 16T1C ternary bitcell, which provides good linearity in a compact area, and a charge-based partial-sum adder circuit, which removes the analog-to-digital conversion that consumes a large portion of the system energy. Furthermore, flexible data mapping enables execution of entire convolution layers with a smaller bitcell memory capacity. Designed in a 65 nm CMOS technology, the proposed T-CIM achieves a peak performance of 1,316 GOPS and an energy efficiency of 823 TOPS/W.
Pages: 1739-1743 (5 pages)
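The record itself contains no code, so the following is only a minimal functional sketch of the arithmetic a ternary-weight CIM column evaluates. The magnitude-threshold quantizer, the threshold value, and the integer activations are illustrative assumptions, not details taken from the brief; in the actual T-CIM design the accumulation happens in the charge domain via the 16T1C bitcells and the charge-based partial-sum adder, not in software.

```python
import numpy as np

def ternary_quantize(w: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Map real-valued weights to {-1, 0, +1} by magnitude thresholding
    (one common TNN quantization scheme; the paper may use another)."""
    q = np.zeros_like(w, dtype=np.int8)
    q[w > threshold] = 1
    q[w < -threshold] = -1
    return q

def ternary_mac(activations: np.ndarray, weights_t: np.ndarray) -> int:
    """Multiply-accumulate with ternary weights. Each product reduces to
    add / skip / subtract of an activation, which is why a charge-domain
    bitcell (add charge, do nothing, or remove charge) can evaluate it
    without a digital multiplier."""
    acc = 0
    for a, w in zip(activations, weights_t):
        if w == 1:
            acc += int(a)
        elif w == -1:
            acc -= int(a)
        # w == 0: the cell ideally contributes no charge
    return acc

rng = np.random.default_rng(0)
acts = rng.integers(0, 2, size=64)             # toy binary activations
w_t = ternary_quantize(rng.normal(0, 0.1, 64)) # ternarized weight column
partial = ternary_mac(acts, w_t)               # one column's partial sum
assert partial == int(np.dot(acts, w_t))       # matches the exact dot product
```

As a rough sanity check on the headline figures: if the peak-performance and peak-efficiency numbers were measured at the same operating point, 1,316 GOPS at 823 TOPS/W would imply on the order of 1.6 mW of compute power, though CIM papers often quote the two figures at different configurations.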
Related Papers (50 in total)
  • [1] Spatial-Temporal Hybrid Neural Network With Computing-in-Memory Architecture
    Bai, Kangjun
    Liu, Lingjia
    Yi, Yang
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2021, 68 (07): 2850-2862
  • [2] SpinCIM: spin orbit torque memory for ternary neural networks based on the computing-in-memory architecture
    Luo, Lichuan
    Liu, Dijun
    Zhang, He
    Zhang, Youguang
    Bai, Jinyu
    Kang, Wang
    CCF TRANSACTIONS ON HIGH PERFORMANCE COMPUTING, 2022, 4 (04): 421-434
  • [3] A Reconfigurable 1T1C eDRAM-based Spiking Neural Network Computing-In-Memory Processor for High System-Level Efficiency
    Kim, Seryeong
    Kim, Soyeon
    Um, Soyeon
    Kim, Sangjin
    Li, Zhiyong
    Kim, Sanyeob
    Jo, Wooyoung
    Yoo, Hoi-jun
    2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS, 2023
  • [4] TGBNN: Training Algorithm of Binarized Neural Network With Ternary Gradients for MRAM-Based Computing-in-Memory Architecture
    Fujiwara, Yuya
    Kawahara, Takayuki
    IEEE ACCESS, 2024, 12: 150962-150974
  • [5] Memristor-based Deep Spiking Neural Network with a Computing-In-Memory Architecture
    Nowshin, Fabiha
    Yi, Yang
    PROCEEDINGS OF THE TWENTY THIRD INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN (ISQED 2022), 2022, : 163 - 168
  • [6] Energy-efficient computing-in-memory architecture for AI processor: device, circuit, architecture perspective
    Chang, Liang
    Li, Chenglong
    Zhang, Zhaomin
    Xiao, Jianbiao
    Liu, Qingsong
    Zhu, Zhen
    Li, Weihang
    Zhu, Zixuan
    Yang, Siqi
    Zhou, Jun
    SCIENCE CHINA-INFORMATION SCIENCES, 2021, 64 (06): 45-59
  • [7] Cryogenic Operation of Computing-In-Memory based Spiking Neural Network
    Shamieh, Laith A.
    Wang, Wei-Chun
    Zhang, Shida
    Saligram, Rakshith
    Gaidhane, Amol D.
    Cao, Yu
    Raychowdhury, Arijit
    Datta, Suman
    Mukhopadhyay, Saibal
    PROCEEDINGS OF THE 29TH ACM/IEEE INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN, ISLPED 2024, 2024