Hartley Stochastic Computing For Convolutional Neural Networks

Cited by: 0
Authors
Mozafari, S. H. [1 ]
Clark, J. J. [1 ]
Gross, W. J. [1 ]
Meyer, B. H. [1 ]
Affiliations
[1] McGill Univ, Dept Elect & Comp Engn, Montreal, PQ, Canada
Keywords
TRANSFORM;
DOI
10.1109/SiPS52927.2021.00049
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Energy consumption and latency are two important factors that limit the application of convolutional neural networks (CNNs), particularly on embedded devices. Fourier-based frequency-domain (FD) convolution is a promising low-cost alternative to conventional spatial-domain (SD) implementations for CNNs, since it performs convolution with point-wise multiplications. However, in CNNs the overhead of Fourier-based FD convolution surpasses its computational savings for small filter sizes. In this work, we propose implementing convolutional layers in the FD using the Hartley transform (HT) instead of the Fourier transform. We show that the HT reduces convolution delay and energy consumption even for small filters. By taking the HT of the parameters, we replace convolution with point-wise multiplications; the HT also lets us compress the input feature maps of all convolutional layers before convolving them with filters. To optimize the hardware implementation of our method, we use stochastic computing (SC) to perform the point-wise multiplications in the FD, and we re-formulate the HT to better match SC. We show that, compared to conventional Fourier-based convolution, Hartley SC-based convolution achieves a 1.33x speedup and 1.23x energy savings on a Virtex 7 FPGA when implementing AlexNet on CIFAR-10.
Pages: 235 - 240
Page count: 6
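The abstract rests on two building blocks: point-wise multiplication in the Hartley domain and stochastic-computing multipliers. The sketch below is an illustrative NumPy demonstration of those standard techniques only, not the paper's FPGA implementation or its re-formulated HT; the function names, stream length, and test values are hypothetical. It shows circular convolution via the discrete Hartley transform (DHT) convolution theorem, and a single bipolar SC multiplication realized with an XNOR of two bitstreams.

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via the FFT: DHT(x) = Re(FFT(x)) - Im(FFT(x))."""
    X = np.fft.fft(x)
    return X.real - X.imag

def hartley_circular_conv(x, h):
    """Circular convolution computed with point-wise products in the Hartley domain.

    DHT convolution theorem (indices modulo N):
        Y[k] = 0.5 * (X[k] * (H[k] + H[N-k]) + X[N-k] * (H[k] - H[N-k]))
    """
    X, H = dht(x), dht(h)
    Xr = np.roll(X[::-1], 1)  # X[(N - k) mod N]
    Hr = np.roll(H[::-1], 1)  # H[(N - k) mod N]
    Y = 0.5 * (X * (H + Hr) + Xr * (H - Hr))
    return dht(Y) / len(x)    # inverse DHT is the forward DHT scaled by 1/N

def sc_mul_bipolar(a, b, length=4096, seed=0):
    """Approximate a * b for a, b in [-1, 1] with bipolar stochastic computing.

    Each value v is encoded as a bitstream with P(1) = (v + 1) / 2; a bitwise
    XNOR of two independent streams decodes back to the product a * b.
    """
    rng = np.random.default_rng(seed)
    sa = rng.random(length) < (a + 1) / 2
    sb = rng.random(length) < (b + 1) / 2
    return 2.0 * np.mean(~(sa ^ sb)) - 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x, h = rng.standard_normal(8), rng.standard_normal(8)
    fft_ref = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # Fourier reference
    assert np.allclose(hartley_circular_conv(x, h), fft_ref)
    print(sc_mul_bipolar(0.5, -0.25))  # approximately -0.125
```

Unlike the Fourier transform, the DHT maps real inputs to real outputs, so the point-wise stage needs only real multiplications, which is what makes it attractive to pair with stochastic multipliers.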