Accelerating Deep Learning by Binarized Hardware

Cited by: 0
Authors
Takamaeda-Yamazaki, Shinya [1 ]
Ueyoshi, Kodai [1 ]
Ando, Kota [1 ]
Uematsu, Ryota [1 ]
Hirose, Kazutoshi [1 ]
Ikebe, Masayuki [1 ]
Asai, Tetsuya [1 ]
Motomura, Masato [1 ]
Affiliations
[1] Hokkaido Univ, Sapporo, Hokkaido, Japan
Keywords
DOI
Not available
Chinese Library Classification
TP301 [Theory and Methods];
Discipline Code
081202 ;
Abstract
Hardware-oriented approaches to accelerating deep neural network processing are important for a wide range of embedded intelligent applications. This paper summarizes our recent achievements in efficient neural network processing, focusing on binarization as a route to energy- and area-efficient neural network processors. We first present an energy-efficient binarized processor for deep neural networks that employs an in-memory processing architecture; the fabricated processor LSI achieves high performance and energy efficiency compared to prior work. We then present an architecture exploration technique for a binarized neural network processor on an FPGA. The exploration results indicate that binarized hardware achieves very high performance by exploiting multiple forms of parallelism simultaneously.
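To illustrate why binarization enables such efficient hardware, the sketch below shows the core trick used by binarized neural networks in general (the paper's specific processor design is not reproduced here): once weights and activations are constrained to {-1, +1}, each multiply-accumulate in a dot product collapses into an XNOR followed by a popcount, operations that are cheap to implement in memory arrays or FPGA logic.

```python
# Illustrative sketch of the general binarized dot-product technique
# (XNOR + popcount); function names here are hypothetical, not from the paper.

def binarize(x):
    """Map a real-valued vector to {-1, +1}, encoded as bits (1 -> +1, 0 -> -1)."""
    return [1 if v >= 0 else 0 for v in x]

def xnor_popcount_dot(a_bits, b_bits):
    """Dot product of two {-1, +1} vectors in bit encoding.

    XNOR of matching bits is 1 (contributes +1 to the dot product),
    of differing bits is 0 (contributes -1), so:
        dot = 2 * popcount(XNOR(a, b)) - n
    """
    n = len(a_bits)
    matches = sum(1 for a, b in zip(a_bits, b_bits) if a == b)  # popcount of XNOR
    return 2 * matches - n

w = binarize([0.3, -1.2, 0.7, -0.1])   # -> [1, 0, 1, 0], i.e. (+1, -1, +1, -1)
x = binarize([0.5, 0.4, -0.9, -0.2])   # -> [1, 1, 0, 0], i.e. (+1, +1, -1, -1)
print(xnor_popcount_dot(w, x))          # same result as (+1)(+1)+(-1)(+1)+(+1)(-1)+(-1)(-1) = 0
```

In hardware, the per-element XNORs run fully in parallel and the popcount reduces to an adder tree, which is why a single binarized processing element is orders of magnitude smaller than a floating-point multiplier and why in-memory and FPGA implementations can exploit several levels of parallelism at once.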
Pages: 1045-1051
Page count: 7