Advances and Trends on On-Chip Compute-in-Memory Macros and Accelerators

Cited by: 0
Authors
Seo, Jae-sun [1 ,2 ]
Affiliations
[1] Arizona State Univ, Sch ECEE, Tempe, AZ 85281 USA
[2] Meta Reality Labs, Tempe, AZ 85287 USA
Keywords
Compute-in-memory; AI; accelerator; ASIC
DOI
10.1109/DAC56929.2023.10248014
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Conventional AI accelerators have been bottlenecked by the high volume of data movement and accesses required between memory and compute units. A transformative approach that has emerged to address this is compute-in-memory (CIM) architectures, which perform computation in-place inside volatile or non-volatile memory in an analog or digital manner, greatly reducing data transfers and memory accesses. This paper presents recent advances and trends in CIM macros and CIM-based accelerator designs.
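To make the in-place computation concrete, below is a minimal behavioral sketch (not code from the paper) of how an analog CIM macro evaluates a matrix-vector product inside the memory array: weights are stored as cell conductances, input activations drive the wordlines, each bitline sums the resulting currents into a dot product, and a per-column ADC digitizes the result. The function name analog_cim_matvec, the 64x16 tile size, and the 6-bit ADC resolution are illustrative assumptions.

```python
import numpy as np

def analog_cim_matvec(weights, activations, adc_bits=6):
    """Model one CIM array read: column-wise analog accumulation plus ADC.

    weights     : (rows, cols) signed integer weights mapped onto the array
    activations : (rows,) input vector applied on the wordlines
    adc_bits    : resolution of the column ADCs (illustrative value)
    """
    # Analog accumulation along each bitline: all rows contribute at once,
    # which is what removes the per-element weight fetches of a conventional
    # accelerator datapath.
    column_sums = activations @ weights  # ideal, noise-free partial sums

    # ADC quantization: the analog sum on each bitline is digitized with
    # limited resolution, a key accuracy/energy trade-off in analog CIM.
    full_scale = float(np.max(np.abs(column_sums)))
    if full_scale == 0.0:
        full_scale = 1.0
    levels = 2 ** (adc_bits - 1) - 1
    quantized = np.round(column_sums / full_scale * levels) / levels * full_scale
    return quantized

# Example: a 64x16 weight tile and binary activations (one wordline pulse each).
rng = np.random.default_rng(0)
W = rng.integers(-4, 5, size=(64, 16))
x = rng.integers(0, 2, size=64)
print(analog_cim_matvec(W, x))
```

A digital CIM macro would replace the analog accumulation and ADC step with bit-serial adder trees inside the array, trading some density for exact results; the data-movement savings are the same.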
Pages: 2