A Ternary Neural Network Computing-in-Memory Processor With 16T1C Bitcell Architecture

Cited by: 5
Authors
Jeong, Hoichang [1 ]
Kim, Seungbin [2 ]
Park, Keonhee [2 ]
Jung, Jueun [1 ]
Lee, Kyuho Jason [3 ]
Affiliations
[1] Ulsan Natl Inst Sci & Technol, Dept Elect Engn, Ulsan 44919, South Korea
[2] Ulsan Natl Inst Sci & Technol, Grad Sch Artificial Intelligence, Ulsan 44919, South Korea
[3] Ulsan Natl Inst Sci & Technol, Grad Sch Artificial Intelligence, Dept Elect Engn, Ulsan 44919, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Computer architecture; Throughput; Neural networks; Linearity; Energy efficiency; Common Information Model (computing); Transistors; SRAM; computing-in-memory (CIM); processing-in-memory (PIM); ternary neural network (TNN); analog computing; SRAM MACRO; COMPUTATION; BINARY;
DOI
10.1109/TCSII.2023.3265064
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline codes
0808 ; 0809 ;
Abstract
A highly energy-efficient Computing-in-Memory (CIM) processor for Ternary Neural Network (TNN) acceleration is proposed in this brief. Previous CIM processors for multi-bit-precision neural networks showed low energy efficiency and throughput. Lightweight binary neural networks were accelerated with CIM processors for high energy efficiency but suffered poor inference accuracy. In addition, most prior works suffered from the poor linearity of analog computing and from energy-consuming analog-to-digital conversion. To resolve these issues, we propose a Ternary-CIM (T-CIM) processor with a 16T1C ternary bitcell that achieves good linearity in a compact area, together with a charge-based partial-sum adder circuit that removes the analog-to-digital conversion responsible for a large portion of the system energy. Furthermore, flexible data mapping enables execution of entire convolution layers with a smaller bitcell memory capacity. Designed in 65 nm CMOS technology, the proposed T-CIM achieves 1,316 GOPS of peak performance and 823 TOPS/W of energy efficiency.
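As a functional illustration only (not the paper's circuit), the operation a ternary CIM array evaluates in parallel is a multiply-accumulate where every weight is constrained to {-1, 0, +1}, so each product reduces to an add, a subtract, or a skip; the sketch below assumes ternary activations as well:

```python
def ternary_mac(activations, weights):
    """Multiply-accumulate with ternary weights {-1, 0, +1}.

    Each product is +a, -a, or 0, so no true multiplier is needed;
    this is the arithmetic a ternary CIM column computes in analog.
    """
    assert all(w in (-1, 0, 1) for w in weights)
    acc = 0
    for a, w in zip(activations, weights):
        if w == 1:
            acc += a      # weight +1: accumulate activation
        elif w == -1:
            acc -= a      # weight -1: subtract activation
        # weight 0: cell contributes nothing
    return acc

# Example: a 4-element ternary dot product
print(ternary_mac([1, -1, 0, 1], [1, 1, -1, 0]))  # → 0
```

In hardware, the per-cell contributions are summed as charge on a shared line rather than in a digital loop, which is why linearity of the bitcell matters.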
Pages: 1739-1743
Page count: 5
Related papers
50 records in total
  • [21] UL-CNN: An Ultra-Lightweight Convolutional Neural Network Aiming at Flash-Based Computing-In-Memory Architecture for Pedestrian Recognition
    Yang, Chen
    Zhang, Jingyu
    Chen, Qi
    Xu, Yi
    Lu, Cimang
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2021, 30 (02)
  • [22] An In-Memory-Computing Binary Neural Network Architecture With In-Memory Batch Normalization
    Rege, Prathamesh Prashant
    Yin, Ming
    Parihar, Sanjay
    Versaggi, Joseph
    Nemawarkar, Shashank
    IEEE ACCESS, 2024, 12 : 190889 - 190896
  • [23] Design of Computing-in-Memory (CIM) with Vertical Split-Gate Flash Memory for Deep Neural Network (DNN) Inference Accelerator
    Lue, Hang-Ting
    Hu, Han-Wen
    Hsu, Tzu-Hsuan
    Hsu, Po-Kai
    Wang, Keh-Chung
    Lu, Chih-Yuan
    2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021,
  • [24] A 2.53μW/channel Event-Driven Neural Spike Sorting Processor with Sparsity-Aware Computing-In-Memory Macros
    Jiang, Hao
    Zheng, Jiapei
    Wang, Yunzhengmao
    Zhang, Jinshan
    Zhu, Haozhe
    Lyu, Liangjian
    Chen, Yingping
    Chen, Chixiao
    Liu, Qi
    2023 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS, 2023,
  • [25] VSDCA: A Voltage Sensing Differential Column Architecture Based on 1T2R RRAM Array for Computing-in-Memory Accelerators
    Jing, Zhaokun
    Yan, Bonan
    Yang, Yuchao
    Huang, Ru
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2022, 69 (10) : 4028 - 4041
  • [26] A 28-nm Floating-Point Computing-in-Memory Processor Using Intensive-CIM Sparse-Digital Architecture
    Yan, Shengzhe
    Yue, Jinshan
    He, Chaojie
    Wang, Zi
    Cong, Zhaori
    He, Yifan
    Zhou, Mufeng
    Sun, Wenyu
    Li, Xueqing
    Dou, Chunmeng
    Zhang, Feng
    Yang, Huazhong
    Liu, Yongpan
    Liu, Ming
    IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2024, 59 (08) : 2630 - 2643
  • [27] T-EAP: Trainable Energy-Aware Pruning for NVM-based Computing-in-Memory Architecture
    Chang, Cheng-Yang
    Chuang, Yu-Chuan
    Chou, Kuang-Chao
    Wu, An-Yeu
    2022 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2022): INTELLIGENT TECHNOLOGY IN THE POST-PANDEMIC ERA, 2022, : 78 - 81
  • [28] Computing-In-Memory Neural Network Accelerators for Safety-Critical Systems: Can Small Device Variations Be Disastrous?
    Yan, Zheyu
    Hu, Xiaobo Sharon
    Shi, Yiyu
    2022 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN, ICCAD, 2022,
  • [29] A Skyrmion Racetrack Memory based Computing In-memory Architecture for Binary Neural Convolutional Network
    Pan, Yu
    Ouyang, Peng
    Zhao, Yinglin
    Yin, Shouyi
    Zhang, Youguang
    Wei, Shaojun
    Zhao, Weisheng
    GLSVLSI '19 - PROCEEDINGS OF THE 2019 ON GREAT LAKES SYMPOSIUM ON VLSI, 2019, : 271 - 274
  • [30] Spintronic Computing-in-Memory Architecture Based on Voltage-Controlled Spin-Orbit Torque Devices for Binary Neural Networks
    Wang, Haotian
    Kang, Wang
    Pan, Biao
    Zhang, He
    Deng, Erya
    Zhao, Weisheng
    IEEE TRANSACTIONS ON ELECTRON DEVICES, 2021, 68 (10) : 4944 - 4950