TFix: Exploiting the Natural Redundancy of Ternary Neural Networks for Fault Tolerant In-Memory Vector Matrix Multiplication

Cited by: 1
Authors
Malhotra, Akul [1]
Wang, Chunguang [1]
Gupta, Sumeet Kumar [1]
Affiliations
[1] Purdue Univ, W Lafayette, IN 47907 USA
Keywords
In-Memory Computing; Vector Matrix Multiplication; Ternary Deep Neural Networks; Fault Tolerance;
DOI
10.1109/DAC56929.2023.10247835
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In-memory computing (IMC) and quantization have emerged as promising techniques for edge-based deep neural network (DNN) accelerators, reducing their energy, latency, and storage requirements. In the pursuit of ultra-low precision, ternary-precision DNNs (TDNNs) offer high efficiency without sacrificing much inference accuracy. In this work, we explore the impact of hard faults on IMC-based TDNNs and propose TFix to enhance their fault tolerance. TFix exploits the natural redundancy present in most ternary IMC bitcells as well as the high weight sparsity in TDNNs to provide up to a 40.68% accuracy increase over the baseline with less than 6% energy overhead.
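The abstract's premise rests on two properties of ternary weights: a two-device bitcell has an unused fourth state (natural redundancy), and most ternary weights are zero (sparsity). The following is a minimal, illustrative sketch of those two properties under a generic two-bit (pos, neg) encoding and a simple stuck-at fault model; the threshold, fault rate, and function names are assumptions for demonstration and do not reproduce the TFix scheme itself.

```python
# Illustrative sketch only (not the TFix method): ternarize a weight matrix,
# encode it with a generic two-device bitcell mapping, and inject stuck-at
# faults to see how many weights get corrupted.
import numpy as np

def ternarize(w, threshold=0.05):
    """Map full-precision weights to {-1, 0, +1} with a fixed threshold (assumed value)."""
    t = np.zeros_like(w, dtype=np.int8)
    t[w > threshold] = 1
    t[w < -threshold] = -1
    return t

def encode_two_bit(t):
    """Generic two-device encoding: +1 -> (1,0), -1 -> (0,1), 0 -> (0,0).
    The (1,1) state is unused, which is the 'natural redundancy' of the bitcell."""
    pos = (t == 1).astype(np.int8)
    neg = (t == -1).astype(np.int8)
    return pos, neg

def inject_stuck_at(bits, fault_rate, stuck_value, rng):
    """Hard-fault model: force a random fraction of bits to a stuck value."""
    faulty = bits.copy()
    mask = rng.random(bits.shape) < fault_rate
    faulty[mask] = stuck_value
    return faulty

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(128, 64))         # toy weight matrix
t = ternarize(w)
pos, neg = encode_two_bit(t)

print("weight sparsity:", np.mean(t == 0))        # fraction of zero weights
pos_f = inject_stuck_at(pos, 0.01, 1, rng)        # 1% stuck-at-1 faults (assumed rate)
neg_f = inject_stuck_at(neg, 0.01, 1, rng)
t_faulty = pos_f - neg_f                          # decode back to {-1, 0, +1}
print("weights corrupted:", np.mean(t_faulty != t))
```

Running the sketch shows a large fraction of zero weights and a corruption rate below the raw bit fault rate, which is the intuition the abstract appeals to: sparsity and the unused bitcell state leave room for fault masking.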
Pages: 6