Memristive-based Mixed-signal CGRA for Accelerating Deep Neural Network Inference

Cited by: 2
Authors
Kazerooni-Zand, Reza [1 ]
Kamal, Mehdi [2 ]
Afzali-Kusha, Ali [1 ,3 ]
Pedram, Massoud [2 ]
Affiliations
[1] Univ Tehran, Sch Elect & Comp Engn, Coll Engn, North Karegar St, Tehran 1439957131, Iran
[2] Univ Southern Calif, Elect & Comp Engn Dept, 3740 McClintock Ave, Los Angeles, CA USA
[3] Inst Res Fundamental Sci IPM, Sch Comp Sci, Lavasani St, Tehran 1953833511, Iran
Keywords
Coarse-grained reconfigurable architecture; accelerator; memristor; convolutional neural network; reliability improvement; energy; optimization
DOI
10.1145/3595638
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
In this paper, a mixed-signal coarse-grained reconfigurable architecture (CGRA) for accelerating inference in deep neural networks (DNNs) is presented. It performs dot-product computations in the analog domain to achieve a considerable speed improvement, while all other computations are performed digitally. The proposed structure (called MX-CGRA) employs analog tiles consisting of memristor crossbars. To reduce the overhead of converting data between the analog and digital domains, we utilize a suitable interface between the analog and digital tiles. In addition, the structure benefits from an efficient memory hierarchy in which data is moved as close as possible to the computing fabric. Moreover, to fully utilize the tiles, we define a set of micro-instructions that configure the analog and digital domains; the corresponding context words used in the CGRA are generated from these instructions by a companion compiler tool. The efficacy of MX-CGRA is assessed by modeling the execution of state-of-the-art DNN architectures, used to classify images from the ImageNet dataset, on this structure. Simulation results show that, compared to previous mixed-signal DNN accelerators, a 2.35x higher throughput is achieved on average.
Pages: 23
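
Note: The analog tiles described in the abstract compute dot products with memristor crossbars, where input voltages applied to the rows produce column currents equal to the weighted sums (Kirchhoff's current law), and the results are then digitized at the analog/digital interface. The Python sketch below is a minimal illustration of that idea only; the function names, conductance range, weight bit width, and ADC resolution are illustrative assumptions and are not taken from the paper.

    import numpy as np

    def quantize_weights_to_conductance(W, g_min=1e-6, g_max=1e-4, bits=4):
        """Map a weight matrix to discrete memristor conductance levels.

        The conductance range and bit width are illustrative assumptions,
        not parameters reported for MX-CGRA.
        """
        levels = 2 ** bits
        W_norm = (W - W.min()) / (W.max() - W.min() + 1e-12)    # scale to [0, 1]
        W_q = np.round(W_norm * (levels - 1)) / (levels - 1)    # quantize to discrete levels
        return g_min + W_q * (g_max - g_min)                    # map levels to conductances

    def crossbar_mvm(G, v_in, adc_bits=8):
        """One analog matrix-vector product followed by an idealized ADC.

        Column currents follow Kirchhoff's current law: i_j = sum_i v_i * G[i, j].
        """
        i_out = v_in @ G                                        # analog dot products (currents)
        i_max = np.abs(i_out).max() + 1e-12
        levels = 2 ** (adc_bits - 1)
        return np.round(i_out / i_max * levels) / levels * i_max  # quantized digital read-out

    # Toy usage: a 16x8 weight tile and a random input voltage vector.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((16, 8))
    G = quantize_weights_to_conductance(W)
    v = rng.uniform(0.0, 0.2, size=16)                          # input voltages (V)
    print(crossbar_mvm(G, v))

In a model like this, the ADC resolution is the key knob: lowering adc_bits reduces the analog-to-digital conversion overhead but adds quantization error to the dot-product results, which is the kind of trade-off the interface between the analog and digital tiles in MX-CGRA is intended to manage.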