Analog Weights in ReRAM DNN Accelerators

Cited by: 0
Authors
Eshraghian, Jason K. [1 ]
Kang, Sung-Mo [2 ]
Baek, Seungbum [3 ]
Orchard, Garrick [4 ,5 ]
Iu, Herbert Ho-Ching [1 ]
Lei, Wen [1 ]
Affiliations
[1] Univ Western Australia, Sch Elect Elect & Comp Engn, Crawley, WA 6009, Australia
[2] Univ Calif Santa Cruz, Baskin Sch Engn, Santa Cruz, CA 95064 USA
[3] Chungbuk Natl Univ, Coll Elect & Comp Engn, Cheongju 362763, South Korea
[4] Natl Univ Singapore, Temasek Labs, Singapore 117411, Singapore
[5] Natl Univ Singapore, Singapore Inst Neurotechnol, Singapore 117411, Singapore
Keywords
accelerator; analog; memristor; neural network; ReRAM
DOI
10.1109/aicas.2019.8771550
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Artificial neural networks have become ubiquitous in modern life, triggering the emergence of a new class of application-specific integrated circuits for their acceleration. ReRAM-based accelerators have gained significant traction due to their ability to leverage in-memory computation: arranged in a crossbar, they perform multiply-and-accumulate operations more efficiently than standard CMOS logic. Being resistive switches, however, ReRAM devices can reliably store only one of two states, which severely limits the range of values representable in a computational kernel. This paper presents a novel scheme that alleviates the single-bit-per-device restriction by exploiting the frequency dependence of hysteresis in the v-i plane, assigning kernel information not only to the device conductance but also partially distributing it to the frequency of a time-varying input. We show that this approach reduces the average power consumption of a single crossbar convolution by up to a factor of 16 for an unsigned 8-bit input image, with each convolutional process consuming a worst-case of 1.1 mW, and reduces area by a factor of 8, without degrading accuracy to the level of binarized neural networks. This represents a large saving in computing cost when many simultaneous in-situ multiply-and-accumulate processes occur across different crossbars.
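As a rough sketch of the weight-splitting idea above (a minimal Python model with assumed parameter values, not the paper's device model), the following simulates one crossbar column in which the effective conductance of a binary ReRAM cell depends on the frequency of its time-varying input, so that part of each kernel value is carried by the input frequency:

import numpy as np

# Illustrative on/off conductances (siemens); these values are
# assumptions for the sketch, not the paper's device parameters.
G_ON, G_OFF = 1e-3, 1e-6

def effective_conductance(states, f_norm):
    """Toy model of frequency-dependent hysteresis: as the normalized
    input frequency f_norm rises from 0 to 1, the effective conductance
    of each binary device is pulled from its programmed value toward
    the mean of G_ON and G_OFF (the v-i hysteresis loop shrinking)."""
    g_base = np.where(states > 0.5, G_ON, G_OFF)
    g_mean = 0.5 * (G_ON + G_OFF)
    return (1.0 - f_norm) * g_base + f_norm * g_mean

def crossbar_mac(voltages, states, f_norm):
    """One analog multiply-and-accumulate: the column current is the
    sum over rows of voltage times effective conductance (Ohm's law
    per cell, Kirchhoff's current law along the column)."""
    return voltages @ effective_conductance(states, f_norm)

# The same stored binary pattern realizes different effective weights
# at two input frequencies, so part of the kernel lives in f_norm.
v = np.array([0.1, 0.2, 0.0, 0.3])        # row voltages (volts)
s = np.array([1, 0, 1, 1], dtype=float)   # programmed binary states
print(crossbar_mac(v, s, f_norm=0.0))     # low-frequency readout
print(crossbar_mac(v, s, f_norm=0.8))     # higher-frequency readout

Because the stored state stays binary while the input frequency supplies the remaining resolution, multi-level kernel values can be realized without multi-level device programming, which is where the claimed power and area savings originate.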
Pages: 267-271
Number of pages: 5