Analog Weights in ReRAM DNN Accelerators

Times Cited: 0
Authors
Eshraghian, Jason K. [1 ]
Kang, Sung-Mo [2 ]
Baek, Seungbum [3 ]
Orchard, Garrick [4 ,5 ]
Iu, Herbert Ho-Ching [1 ]
Lei, Wen [1 ]
Affiliations
[1] Univ Western Australia, Sch Elect Elect & Comp Engn, Crawley, WA 6009, Australia
[2] Univ Calif Santa Cruz, Baskin Sch Engn, Santa Cruz, CA 95064 USA
[3] Chungbuk Natl Univ, Coll Elect & Comp Engn, Cheongju 362763, South Korea
[4] Natl Univ Singapore, Temasek Labs, Singapore 117411, Singapore
[5] Natl Univ Singapore, Singapore Inst Neurotechnol, Singapore 117411, Singapore
Keywords
accelerator; analog; memristor; neural network; ReRAM;
DOI
10.1109/aicas.2019.8771550
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial neural networks have become ubiquitous in modern life, which has triggered the emergence of a new class of application-specific integrated circuits for their acceleration. ReRAM-based accelerators have gained significant traction due to their ability to leverage in-memory computation: in a crossbar structure, they can perform multiply-and-accumulate operations more efficiently than standard CMOS logic. Being resistive switches, however, ReRAM devices can reliably store only one of two states, which severely limits the range of values in a computational kernel. This paper presents a novel scheme for alleviating the single-bit-per-device restriction by exploiting the frequency dependence of v-i plane hysteresis and assigning kernel information not only to the device conductance but also partially distributing it to the frequency of a time-varying input. We show that this approach reduces the average power consumption of a single crossbar convolution by up to a factor of 16x for an unsigned 8-bit input image, with each convolutional process consuming a worst-case 1.1 mW, and reduces area by a factor of 8x, without reducing accuracy to the level of binarized neural networks. This represents a massive saving in computing cost when many simultaneous in-situ multiply-and-accumulate processes occur across different crossbars.
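As a rough illustration of the weight-splitting idea described in the abstract, the Python sketch below factors each kernel weight into a 1-bit device conductance plus a discrete input-frequency scaling, then sums per-device currents along a crossbar column to emulate a multiply-and-accumulate. The conductance values (G_ON, G_OFF), frequency levels (FREQ_LEVELS), and helper functions (encode_weight, crossbar_column_mac) are all illustrative assumptions, not the authors' circuit or code.

```python
import numpy as np

# Minimal, hypothetical sketch (not the authors' scheme): it only illustrates
# splitting a kernel weight between a 1-bit device conductance and the
# frequency of a time-varying input, then summing column currents as a MAC.

G_ON, G_OFF = 1e-4, 1e-7                       # assumed on/off conductances (S)
FREQ_LEVELS = np.array([1.0, 2.0, 4.0, 8.0])   # assumed discrete input-frequency scalings

def encode_weight(w):
    """Factor an unsigned weight into (conductance state, frequency scaling)."""
    if w <= 0:
        return G_OFF, FREQ_LEVELS[0]           # "off" device contributes ~no current
    f = FREQ_LEVELS[np.argmin(np.abs(FREQ_LEVELS - w))]  # nearest frequency level
    return G_ON, f                             # the device itself stores only 1 bit

def crossbar_column_mac(inputs, weights):
    """Sum per-device currents along one crossbar column (one kernel)."""
    i_col = 0.0
    for x, w in zip(inputs, weights):
        g, f = encode_weight(w)
        i_col += x * f * g                     # frequency scales the input drive
    return i_col

# Example: a 3x3 patch of an unsigned 8-bit image against a small kernel
patch = np.random.randint(0, 256, size=9) / 255.0
kernel = np.array([0, 1, 2, 4, 8, 4, 2, 1, 0], dtype=float)
print("column current (A):", crossbar_column_mac(patch, kernel))
```

In this toy model, the crossbar devices are binary while the remaining weight resolution is carried by the choice of input-frequency level, which is the intuition behind the power and area savings claimed in the abstract.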
Pages: 267 - 271
Number of Pages: 5