A Ternary Neural Network Computing-in-Memory Processor With 16T1C Bitcell Architecture

Cited by: 5
Authors
Jeong, Hoichang [1 ]
Kim, Seungbin [2 ]
Park, Keonhee [2 ]
Jung, Jueun [1 ]
Lee, Kyuho Jason [3 ]
Affiliations
[1] Ulsan Natl Inst Sci & Technol, Dept Elect Engn, Ulsan 44919, South Korea
[2] Ulsan Natl Inst Sci & Technol, Grad Sch Artificial Intelligence, Ulsan 44919, South Korea
[3] Ulsan Natl Inst Sci & Technol, Grad Sch Artificial Intelligence, Dept Elect Engn, Ulsan 44919, South Korea
Funding
National Research Foundation of Singapore
Keywords
Computer architecture; Throughput; Neural networks; Linearity; Energy efficiency; Common Information Model (computing); Transistors; SRAM; computing-in-memory (CIM); processing-in-memory (PIM); ternary neural network (TNN); analog computing; SRAM MACRO; COMPUTATION; BINARY;
DOI
10.1109/TCSII.2023.3265064
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology and Communication Technology]
Discipline Codes
0808; 0809
Abstract
A highly energy-efficient Computing-in-Memory (CIM) processor for Ternary Neural Network (TNN) acceleration is proposed in this brief. Previous CIM processors for multi-bit-precision neural networks showed low energy efficiency and throughput, while CIM processors for lightweight binary neural networks achieved high energy efficiency but poor inference accuracy. In addition, most previous works suffered from the poor linearity of analog computing and from energy-consuming analog-to-digital conversion. To resolve these issues, we propose a Ternary CIM (T-CIM) processor with a 16T1C ternary bitcell that provides good linearity in a compact area, and a charge-based partial-sum adder circuit that eliminates the analog-to-digital conversion responsible for a large portion of the system energy. Furthermore, flexible data mapping enables execution of entire convolution layers with a smaller bitcell memory capacity. Designed in 65 nm CMOS technology, the proposed T-CIM achieves a peak performance of 1,316 GOPS and an energy efficiency of 823 TOPS/W.
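For context on the workload such a macro targets: in a TNN the weights are constrained to {-1, 0, +1}, so each multiply in a multiply-accumulate reduces to passing, zeroing, or negating an activation, and the remaining cost is accumulating the partial sums. The snippet below is a minimal functional sketch of that ternary dot product, not the paper's circuit or code; the quantization threshold `delta`, the 128-input layer size, and the binary input activations are illustrative assumptions.

```python
import numpy as np

def ternarize(w, delta=0.05):
    """Quantize real-valued weights to {-1, 0, +1}.
    `delta` is an illustrative threshold, not a value from the paper."""
    t = np.zeros_like(w, dtype=np.int8)
    t[w > delta] = 1
    t[w < -delta] = -1
    return t

def ternary_mac(activations, ternary_weights):
    """Functional model of one output's multiply-accumulate:
    each ternary weight passes (+1), blocks (0), or negates (-1)
    its activation; the accumulation is done digitally here."""
    return int(np.sum(activations * ternary_weights))

# Toy example: one output neuron with 128 inputs (shapes are assumptions).
rng = np.random.default_rng(0)
w_real = rng.normal(0, 0.1, size=128)   # pretrained full-precision weights
w_tern = ternarize(w_real)              # ternary weights held in the bitcells
x = rng.integers(0, 2, size=128)        # binary input activations (assumption)
print("partial sum:", ternary_mac(x, w_tern))
```

The accumulation that `np.sum` performs digitally in this sketch corresponds to what the abstract attributes to the charge-based partial-sum adder, which is how the design avoids the energy-consuming analog-to-digital conversion step.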
Pages: 1739-1743
Page count: 5
Related Papers
50 records in total
  • [41] CNNP-v2: An Energy Efficient Memory-Centric Convolutional Neural Network Processor Architecture
    Choi, Sungpill
    Bong, Kyeongryeol
    Han, Donghyeon
    Yoo, Hoi-Jun
    2019 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2019), 2019, : 38 - 41
  • [42] Scalable image sensor/processor architecture with frame memory buffer and 2-D cellular neural network
    Cho, KB
    Sheu, BJ
    Young, WC
    ISCAS '98 - PROCEEDINGS OF THE 1998 INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, VOLS 1-6, 1998, : C73 - C76
  • [43] A negative capacitance FET based energy efficient 6T SRAM computing-in-memory (CiM) cell design for deep neural networks
    Birudu, Venu
    Yellampalli, Siva Sankar
    Vaddi, Ramesh
    MICROELECTRONICS JOURNAL, 2023, 139
  • [44] Ternary Output Binary Neural Network With Zero-Skipping for MRAM-Based Digital In-Memory Computing
    Na, Taehui
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2023, 70 (07) : 2655 - 2659
  • [45] An Energy Efficient Computing-in-Memory Accelerator With 1T2R Cell and Fully Analog Processing for Edge AI Applications
    Zhou, Keji
    Zhao, Chenyang
    Fang, Jinbei
    Jiang, Jingwen
    Chen, Deyang
    Huang, Yujie
    Jing, Minge
    Han, Jun
    Tian, Haidong
    Xiong, Xiankui
    Liu, Qi
    Xue, Xiaoyong
    Zeng, Xiaoyang
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2021, 68 (08) : 2932 - 2936
  • [46] A 3D MCAM architecture based on flash memory enabling binary neural network computing for edge AI
    Bai, Maoying
    Wu, Shuhao
    Wang, Hai
    Wang, Hua
    Feng, Yang
    Qi, Yueran
    Wang, Chengcheng
    Chai, Zheng
    Min, Tai
    Wu, Jixuan
    Zhan, Xuepeng
    Chen, Jiezhi
    SCIENCE CHINA-INFORMATION SCIENCES, 2024, 67 (12)
  • [47] A 5.1pJ/Neuron 127.3μs/Inference RNN-based Speech Recognition Processor using 16 Computing-in-Memory SRAM Macros in 65nm CMOS
    Guo, Ruiqi
    Liu, Yonggang
    Zheng, Shixuan
    Wu, Ssu-Yen
    Ouyang, Peng
    Khwa, Win-San
    Chen, Xi
    Chen, Jia-Jing
    Li, Xiudong
    Liu, Leibo
    Chang, Meng-Fan
    Wei, Shaojun
    Yin, Shouyi
    2019 SYMPOSIUM ON VLSI CIRCUITS, 2019, : C120 - C121
  • [49] Using Many Small 1T1C Memory Arrays in a Large and Dense Multicore Processor
    Carlstedt, Gunnar
    Rimborg, Mats
    PROCEEDINGS OF THE INTERNATIONAL SYMPOSIUM ON MEMORY SYSTEMS, MEMSYS 2022, 2022,
  • [50] 2T1M-Based Double Memristive Crossbar Architecture for In-Memory Computing
    Vourkas, Ioannis
    Papandroulidakis, Georgios
    Sirakoulis, Georgios Ch.
    Abusleme, Angel
    INTERNATIONAL JOURNAL OF UNCONVENTIONAL COMPUTING, 2016, 12 (04) : 265 - 280