FBNA: A Fully Binarized Neural Network Accelerator

Cited by: 58
|
Authors
Guo, Peng [1 ,2 ]
Ma, Hong [1 ]
Chen, Ruizhi [1 ,2 ]
Li, Pin [1 ]
Xie, Shaolin [1 ]
Wang, Donglin [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing, Peoples R China
Source
2018 28TH INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE LOGIC AND APPLICATIONS (FPL) | 2018
Keywords
CNN; BNN; FPGA; Accelerator;
DOI
10.1109/FPL.2018.00016
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
In recent research, the binarized neural network (BNN) has been proposed to address the massive computation and large memory footprint of the convolutional neural network (CNN). Several works have designed BNN-specific accelerators and shown very promising results. Nevertheless, in those architectures only part of the network is binarized, so the benefits of binary operations are not fully exploited. In this work, we propose the first fully binarized convolutional neural network accelerator (FBNA) architecture, in which all convolutional operations, including the first layer and padding, are binarized and unified. The fully unified architecture provides more opportunities for resource, parallelism, and scalability optimization. Compared with the state-of-the-art BNN accelerator, our evaluation on CIFAR-10 shows 3.1x performance, 5.4x resource efficiency, and 4.9x power efficiency.
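As background to the abstract (not code from the paper itself): the reason binarizing every convolution pays off is that a dot product over {-1, +1} values reduces to a bitwise XNOR followed by a popcount, replacing multiply-accumulate hardware with trivial logic. A minimal sketch of that standard BNN identity, with bit 1 encoding +1 and bit 0 encoding -1:

```python
def popcount(x: int) -> int:
    """Number of set bits (portable; int.bit_count needs Python >= 3.10)."""
    return bin(x).count("1")

def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as n-bit ints.

    XNOR yields 1 exactly where the signs agree; each agreement contributes
    +1 and each disagreement -1, so dot = matches - (n - matches) = 2*matches - n.
    """
    mask = (1 << n) - 1                             # keep only the n data bits
    matches = popcount(~(a_bits ^ w_bits) & mask)   # XNOR, then popcount
    return 2 * matches - n

# a = [+1, -1, +1, +1] -> 0b1011 ; w = [+1, +1, -1, +1] -> 0b1101
# True dot product: (+1)(+1) + (-1)(+1) + (+1)(-1) + (+1)(+1) = 0
print(binary_dot(0b1011, 0b1101, 4))
```

A fully binarized convolution layer is this operation applied per output pixel over the packed weight and activation windows; the paper's contribution is extending that uniform treatment even to the first layer and to padding, which prior accelerators handled with non-binary arithmetic.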
Pages: 51 - 54
Page count: 4
Related Papers
50 records
  • [1] BiNMAC: Binarized neural Network Manycore ACcelerator
    Jafari, Ali
    Hosseini, Morteza
    Kulkarni, Adwaya
    Patel, Chintan
    Mohsenin, Tinoosh
    PROCEEDINGS OF THE 2018 GREAT LAKES SYMPOSIUM ON VLSI (GLSVLSI'18), 2018, : 443 - 446
  • [2] A Fully Connected Layer Elimination for a Binarized Convolutional Neural Network on an FPGA
    Nakahara, Hiroki
    Fujii, Tomoya
    Sato, Shimpei
    2017 27TH INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE LOGIC AND APPLICATIONS (FPL), 2017,
  • [3] Fully Binarized Convolutional Neural Network for Accelerating Edge Vision Computing
    Jiang, Peiqing
    Wu, Lijun
    Chen, Zhicong
    Lai, Yunfeng
    Cheng, Shuying
    Lin, Peijie
    2018 INTERNATIONAL CONFERENCE ON CLOUD COMPUTING, BIG DATA AND BLOCKCHAIN (ICCBB 2018), 2018, : 164 - 169
  • [4] A High-Efficiency FPGA-Based Accelerator for Binarized Neural Network
    Guo, Peng
    Ma, Hong
    Chen, Ruizhi
    Wang, Donglin
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2019, 28
  • [5] On-Sensor Binarized Fully Convolutional Neural Network for Localisation and Coarse Segmentation
    Liu, Yanan
    Lu, Yao
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 3628 - 3637
  • [6] Binarized graph neural network
    Wang, Hanchen
    Lian, Defu
    Zhang, Ying
    Qin, Lu
    He, Xiangjian
    Lin, Yiguang
    Lin, Xuemin
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2021, 24 (03): 825 - 848
  • [7] A Fully Onchip Binarized Convolutional Neural Network FPGA Implementation with Accurate Inference
    Yang, Li
    He, Zhezhi
    Fan, Deliang
    PROCEEDINGS OF THE INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN (ISLPED '18), 2018, : 285 - 290
  • [8] Reconfigurable and hardware efficient adaptive quantization model-based accelerator for binarized neural network
    Sasikumar, A.
    Ravi, Logesh
    Kotecha, Ketan
    Indragandhi, V.
    Subramaniyaswamy, V.
    COMPUTERS & ELECTRICAL ENGINEERING, 2022, 102