FBNA: A Fully Binarized Neural Network Accelerator

Cited by: 58
Authors:
Guo, Peng [1,2]
Ma, Hong [1]
Chen, Ruizhi [1,2]
Li, Pin [1]
Xie, Shaolin [1]
Wang, Donglin [1]
Affiliations:
[1] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing, Peoples R China
Source:
2018 28TH INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE LOGIC AND APPLICATIONS (FPL), 2018
Keywords:
CNN; BNN; FPGA; Accelerator
DOI:
10.1109/FPL.2018.00016
Chinese Library Classification (CLC): TP3 [computing technology, computer technology]
Discipline code: 0812
Abstract:
In recent research, binarized neural networks (BNNs) have been proposed to address the massive computation and large memory footprint of convolutional neural networks (CNNs). Several works have designed dedicated BNN accelerators and reported very promising results. Nevertheless, only part of the network is binarized in those architectures, so the benefits of binary operations are not fully exploited. In this work, we propose the first fully binarized convolutional neural network accelerator (FBNA) architecture, in which all convolutional operations, including the first layer and padding, are binarized and unified. The fully unified architecture offers more opportunities for resource, parallelism and scalability optimization. Compared with the state-of-the-art BNN accelerator, our evaluation on CIFAR-10 shows 3.1x higher performance, 5.4x higher resource efficiency and 4.9x higher power efficiency.
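To make the binarization idea concrete, the sketch below shows the standard XNOR-popcount formulation of a binary dot product, which is the building block that binary-convolution accelerators of this kind rely on. It is a minimal Python/NumPy illustration under that general assumption, not the paper's hardware design; in particular, the way FBNA unifies the first layer and padding into the same binary form is not reproduced here.

import numpy as np

def binarize(x):
    # Map real values to {+1, -1} by sign (0 treated as +1).
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(a_bits, w_bits):
    # Binary dot product via XNOR + popcount.
    # Encoding +1 -> 1 and -1 -> 0 lets XNOR mark matching positions,
    # so dot = (#matches - #mismatches) = 2 * popcount(XNOR) - N.
    n = a_bits.size
    a = a_bits > 0            # boolean encoding of +/-1 values
    w = w_bits > 0
    matches = np.count_nonzero(~(a ^ w))   # XNOR, then popcount
    return 2 * matches - n

# Sanity check against an ordinary dot product on +/-1 vectors.
rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(64))
w = binarize(rng.standard_normal(64))
assert xnor_popcount_dot(a, w) == int(a.astype(np.int32) @ w.astype(np.int32))

In hardware, the boolean arrays above correspond to packed bit vectors, so one XNOR plus a popcount replaces N multiply-accumulate operations, which is the source of the resource and power savings reported in the abstract.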
Pages: 51-54 (4 pages)
Related papers (items 31-40 of 50 shown):
  • [31] FP-BNN: Binarized neural network on FPGA. Liang, Shuang; Yin, Shouyi; Liu, Leibo; Luk, Wayne; Wei, Shaojun. NEUROCOMPUTING, 2018, 275: 1072-1086.
  • [32] Weight Compression-Friendly Binarized Neural Network. Jiao, Yuzhong; Huo, Xiao; Lei, Yuan; Li, Sha; Li, Yiu Kei. 2020 IEEE GLOBAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND INTERNET OF THINGS (GCAIOT), 2020: 117-122.
  • [33] Binarized Attributed Network Embedding via Neural Networks. Xia, Hangyu; Gao, Neng; Peng, Jia; Mo, Jingjie; Wang, Jiong. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020.
  • [34] Efficient SIMD Implementation of Binarized Convolutional Neural Network. Park, Yongmin; Kim, Seongchan; Kim, Tae-Hwan. 2018 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE), 2018.
  • [35] A Novel Convolutional Neural Network Accelerator That Enables Fully-pipelined Execution of Layers. Kang, Donghyun; Kang, Jintaek; Kwon, Hyungdal; Park, Hyunsik; Ha, Soonhoi. 2019 IEEE 37TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD 2019), 2019: 698-701.
  • [36] A Fully-Parallel Reconfigurable Spiking Neural Network Accelerator with Structured Sparse Connections. Li, Mingyang; Kan, Yirong; Zhang, Renyuan; Nakashima, Yasuhiko. 2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 2024), 2024.
  • [37] Binarized Neural Network Accelerator Macro Using Ultralow-Voltage Retention SRAM for Energy Minimum-Point Operation. Shiotsu, Yusaku; Sugahara, Satoshi. IEEE JOURNAL ON EXPLORATORY SOLID-STATE COMPUTATIONAL DEVICES AND CIRCUITS, 2022, 8(2): 134-144.
  • [38] LightBulb: A Photonic-Nonvolatile-Memory-based Accelerator for Binarized Convolutional Neural Networks. Zokaee, Farzaneh; Lou, Qian; Youngblood, Nathan; Liu, Weichen; Xie, Yiyuan; Jiang, Lei. PROCEEDINGS OF THE 2020 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2020), 2020: 1438-1443.
  • [39] Variation-Tolerant Capacitive Array for Binarized Neural Network. Kim, Hyeongsu; Woo, Sung Yun; Lee, Soochang; Seo, Young-Tak; Park, Byung-Gook; Lee, Jong-Ho. IEEE ELECTRON DEVICE LETTERS, 2022, 43(3): 478-481.
  • [40] Binarized Depthwise Separable Neural Network for Object Tracking in FPGA. Yang, Li; He, Zhezhi; Fan, Deliang. GLSVLSI '19 - PROCEEDINGS OF THE 2019 ON GREAT LAKES SYMPOSIUM ON VLSI, 2019: 347-350.