A Ternary Neural Network Computing-in-Memory Processor With 16T1C Bitcell Architecture

Cited by: 5
Authors:
Jeong, Hoichang [1 ]
Kim, Seungbin [2 ]
Park, Keonhee [2 ]
Jung, Jueun [1 ]
Lee, Kyuho Jason [3 ]
Affiliations:
[1] Ulsan Natl Inst Sci & Technol, Dept Elect Engn, Ulsan 44919, South Korea
[2] Ulsan Natl Inst Sci & Technol, Grad Sch Artificial Intelligence, Ulsan 44919, South Korea
[3] Ulsan Natl Inst Sci & Technol, Grad Sch Artificial Intelligence, Dept Elect Engn, Ulsan 44919, South Korea
Funding:
National Research Foundation of Singapore
Keywords:
Computer architecture; Throughput; Neural networks; Linearity; Energy efficiency; Common Information Model (computing); Transistors; SRAM; computing-in-memory (CIM); processing-in-memory (PIM); ternary neural network (TNN); analog computing; SRAM MACRO; COMPUTATION; BINARY;
DOI
10.1109/TCSII.2023.3265064
Chinese Library Classification:
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification Codes:
0808; 0809
Abstract:
A highly energy-efficient Computing-in-Memory (CIM) processor for Ternary Neural Network (TNN) acceleration is proposed in this brief. Previous CIM processors for multi-bit-precision neural networks suffered from low energy efficiency and throughput, while CIM processors accelerating lightweight binary neural networks achieved high energy efficiency at the cost of poor inference accuracy. In addition, most prior works suffered from the poor linearity of analog computing and from energy-consuming analog-to-digital conversion. To resolve these issues, we propose a Ternary-CIM (T-CIM) processor with a 16T1C ternary bitcell, which achieves good linearity in a compact area, and a charge-based partial-sum adder circuit, which removes the analog-to-digital conversion that consumes a large portion of the system energy. Furthermore, flexible data mapping enables execution of entire convolutional layers with a smaller bitcell memory capacity. Designed in 65 nm CMOS technology, the proposed T-CIM achieves a peak performance of 1,316 GOPS and an energy efficiency of 823 TOPS/W.
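For intuition, the operation such a TNN macro evaluates per bitcell column is a ternary multiply-accumulate: weights (and here, for simplicity, activations) are constrained to {-1, 0, +1}, so each product reduces to a sign flip or a zero, and the column output is their running partial sum. The sketch below is a minimal functional model under that assumption; the threshold-based `ternary_quantize` scheme and all function names are illustrative, not taken from the paper, and the charge-domain accumulation is modeled as a plain integer dot product.

```python
import numpy as np

def ternary_quantize(x, threshold=0.05):
    """Map real values to {-1, 0, +1}; the fixed threshold is a hypothetical choice."""
    q = np.zeros_like(x, dtype=np.int8)
    q[x > threshold] = 1
    q[x < -threshold] = -1
    return q

def ternary_mac(activations, weights):
    """Ternary dot product: the partial sum a CIM column accumulates in the charge domain."""
    assert set(np.unique(weights)).issubset({-1, 0, 1}), "weights must be ternary"
    # Each product is just +a, -a, or 0, so no multipliers are needed in hardware.
    return int(np.dot(activations.astype(np.int32), weights.astype(np.int32)))

rng = np.random.default_rng(0)
w = ternary_quantize(rng.normal(size=8))   # ternary weights stored in the bitcells
a = ternary_quantize(rng.normal(size=8))   # ternary input activations
print(ternary_mac(a, w))
```

Because every term is -1, 0, or +1 times the activation, the analog array only ever adds, subtracts, or skips a charge packet, which is what makes the charge-based partial-sum adder feasible without per-column ADCs.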
Pages: 1739-1743 (5 pages)
Related Papers (50 records):
  • [31] Trends and Challenges in Computing-in-Memory for Neural Network Model: A Review From Device Design to Application-Side Optimization
    Yu, Ke
    Kim, Sunmean
    Choi, Jun Rim
    IEEE ACCESS, 2024, 12 : 186679 - 186702
  • [32] Exploiting and Enhancing Computation Latency Variability for High-Performance Time-Domain Computing-in-Memory Neural Network Accelerators
    Wang, Chia-Chun
    Lo, Yun-Chen
    Wu, Jun-Shen
    Tsai, Yu-Chih
    Chang, Chia-Cheng
    Hsu, Tsen-Wei
    Chu, Min-Wei
    Lai, Chuan-Yao
    Liu, Ren-Shuo
    2023 IEEE 41ST INTERNATIONAL CONFERENCE ON COMPUTER DESIGN, ICCD, 2023, : 515 - 522
  • [33] YOLoC: DeploY Large-Scale Neural Network by ROM-based Computing-in-Memory using ResiduaL Branch on a Chip
    Chen, Yiming
    Yin, Guodong
    Tan, Zhanhong
    Lee, Mingyen
    Yang, Zekun
    Liu, Yongpan
    Yang, Huazhong
    Ma, Kaisheng
    Li, Xueqing
    PROCEEDINGS OF THE 59TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC 2022, 2022, : 1093 - 1098
  • [34] Considerations of Integrating Computing-In-Memory and Processing-In-Sensor into Convolutional Neural Network Accelerators for Low-Power Edge Devices
    Tang, Kea-Tiong
    Wei, Wei-Chen
    Yeh, Zuo-Wei
    Hsu, Tzu-Hsiang
    Chiu, Yen-Cheng
    Xue, Cheng-Xin
    Kuo, Yu-Chun
    Wen, Tai-Hsing
    Ho, Mon-Shu
    Lo, Chung-Chuan
    Liu, Ren-Shuo
    Hsieh, Chih-Cheng
    Chang, Meng-Fan
    2019 SYMPOSIUM ON VLSI CIRCUITS, 2019, : T166 - T167
  • [35] An 11T1C Bit-Level-Sparsity-Aware Computing-in-Memory Macro With Adaptive Conversion Time and Computation Voltage
    Lin, Ye
    Li, Yuandong
    Zhang, Heng
    Ma, He
    Lv, Jingjing
    Jiang, Anying
    Du, Yuan
    Du, Li
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2024, 71 (11) : 4985 - 4995
  • [36] C2IM: A Compact Computing-In-Memory Unit of 10 Transistors with Standard 6T SRAM
    Ren, Erxiang
    Luo, Li
    Liu, Zheyu
    Wei, Qi
    Qiao, Fei
    2020 IEEE 33RD INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (SOCC), 2020, : 113 - 116
  • [37] In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM
    Huang, Jun-Ying
    Syu, Jing-Lin
    Tsou, Yao-Tung
    Kuo, Sy-Yen
    Chang, Ching-Ray
    ELECTRONICS, 2022, 11 (08)
  • [38] A 701.7 TOPS/W Compute-in-Memory Processor With Time-Domain Computing for Spiking Neural Network
    Park, Keonhee
    Jeong, Hoichang
    Kim, Seungbin
    Shin, Jeongmin
    Kim, Minseo
    Lee, Kyuho Jason
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2025, 72 (01) : 25 - 35
  • [39] An eDRAM-Based Computing-in-Memory Macro With Full-Valid-Storage and Channel-Wise-Parallelism for Depthwise Neural Network
    Qiao, Xin
    Yang, Youming
    Xue, Chang
    He, Yandong
    Cui, Xiaoxin
    Jia, Song
    Wang, Yuan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2024, 71 (05) : 2539 - 2543
  • [40] A scalable and reconfigurable in-memory architecture for ternary deep spiking neural network with ReRAM based neurons
    Lin, Jie
    Yuan, Jiann-Shiun
    NEUROCOMPUTING, 2020, 375 : 102 - 112