FBNA: A Fully Binarized Neural Network Accelerator

Cited: 58
|
Authors
Guo, Peng [1 ,2 ]
Ma, Hong [1 ]
Chen, Ruizhi [1 ,2 ]
Li, Pin [1 ]
Xie, Shaolin [1 ]
Wang, Donglin [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing, Peoples R China
Source
2018 28TH INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE LOGIC AND APPLICATIONS (FPL) | 2018
Keywords
CNN; BNN; FPGA; Accelerator
DOI
10.1109/FPL.2018.00016
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Recent research has proposed the binarized neural network (BNN) to address the massive computation and large memory footprint of convolutional neural networks (CNNs). Several works have designed dedicated BNN accelerators and reported promising results. Nevertheless, only part of the neural network is binarized in these architectures, and the benefits of binary operations are not fully exploited. In this work, we propose the first fully binarized convolutional neural network accelerator (FBNA) architecture, in which all convolutional operations are binarized and unified, including even the first layer and padding. The fully unified architecture provides more opportunities for resource, parallelism, and scalability optimization. Compared with the state-of-the-art BNN accelerator, our evaluation shows 3.1x performance, 5.4x resource efficiency, and 4.9x power efficiency on CIFAR-10.
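The record does not reproduce the accelerator's design details. Purely as a minimal sketch of the XNOR-popcount arithmetic that binarized convolution builds on (not the authors' FBNA implementation), the example below computes a dot product between {-1, +1} activations and weights using bit-packed words; all function names are illustrative.

# Minimal sketch: binary dot product via XNOR + popcount.
# Activations and weights in {-1, +1} are packed into integer bitmasks
# (bit = 1 encodes +1, bit = 0 encodes -1). This is the standard
# XNOR-popcount trick used by BNN accelerators, shown here only to
# illustrate the arithmetic; it is not the FBNA hardware design.

def pack_bits(values):
    """Pack a list of +1/-1 values into an integer bitmask."""
    mask = 0
    for i, v in enumerate(values):
        if v == 1:
            mask |= 1 << i
    return mask

def binary_dot(x_bits, w_bits, n):
    """Dot product of two {-1,+1} vectors of length n, given as bitmasks.

    XNOR marks positions where the signs agree; each agreement
    contributes +1 and each disagreement -1, so the result is
    2 * popcount(agreements) - n.
    """
    agree = ~(x_bits ^ w_bits) & ((1 << n) - 1)  # XNOR, truncated to n bits
    return 2 * bin(agree).count("1") - n

if __name__ == "__main__":
    x = [1, -1, 1, 1, -1, -1, 1, -1]
    w = [1, 1, -1, 1, -1, 1, 1, -1]
    ref = sum(a * b for a, b in zip(x, w))        # ordinary multiply-accumulate
    got = binary_dot(pack_bits(x), pack_bits(w), len(x))
    assert ref == got
    print("binary dot product:", got)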
Pages: 51 - 54
Number of pages: 4
Related Papers
50 records in total
  • [41] A Threshold Neuron Pruning for a Binarized Deep Neural Network on an FPGA
    Fujii, Tomoya
    Sato, Shimpei
    Nakahara, Hiroki
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2018, E101D (02): 376 - 386
  • [42] A Convolutional Result Sharing Approach for Binarized Neural Network Inference
    Chang, Ya-Chun
    Lin, Chia-Chun
    Lin, Yi-Ting
    Chen, Yung-Chih
    Wang, Chun-Yao
    PROCEEDINGS OF THE 2020 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2020), 2020: 780 - 785
  • [43] All Binarized Convolutional Neural Network and Its implementation on an FPGA
    Shimoda, Masayuki
    Sato, Shimpei
    Nakahara, Hiroki
    2017 INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE TECHNOLOGY (ICFPT), 2017: 291 - 294
  • [44] FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
    Umuroglu, Yaman
    Fraser, Nicholas J.
    Gambardella, Giulio
    Blott, Michaela
    Leong, Philip
    Jahre, Magnus
    Vissers, Kees
    FPGA'17: PROCEEDINGS OF THE 2017 ACM/SIGDA INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE GATE ARRAYS, 2017: 65 - 74
  • [45] Binarized Neural-Network Parallel-Processing Accelerator Macro Designed for an Energy Efficiency Higher Than 100 TOPS/W
    Shiotsu, Yusaku
    Sugahara, Satoshi
    IEEE JOURNAL ON EXPLORATORY SOLID-STATE COMPUTATIONAL DEVICES AND CIRCUITS, 2025, 11: 25 - 33
  • [46] A Binarized Neural Network Approach to Accelerate in-Vehicle Network Intrusion Detection
    Zhang, Linxi
    Yan, Xuke
    Ma, Di
    IEEE ACCESS, 2022, 10: 123505 - 123520
  • [47] A Mixed-Signal Binarized Convolutional-Neural-Network Accelerator Integrating Dense Weight Storage and Multiplication for Reduced Data Movement
    Valavi, Hossein
    Ramadge, Peter J.
    Nestler, Eric
    Verma, Naveen
    2018 IEEE SYMPOSIUM ON VLSI CIRCUITS, 2018: 141 - 142
  • [48] Comparison of training methods for the binarized neural object detection network
    Kim, Sungjei
    Kim, Seong-heum
    Hwang, Youngbae
    Jeong, Jinwoo
    2019 34TH INTERNATIONAL TECHNICAL CONFERENCE ON CIRCUITS/SYSTEMS, COMPUTERS AND COMMUNICATIONS (ITC-CSCC 2019), 2019: 346 - 348
  • [49] Binarized Neural Networks
    Hubara, Itay
    Courbariaux, Matthieu
    Soudry, Daniel
    El-Yaniv, Ran
    Bengio, Yoshua
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [50] Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization
    Helwegen, Koen
    Widdicombe, James
    Geiger, Lukas
    Liu, Zechun
    Cheng, Kwang-Ting
    Nusselder, Roeland
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32