FPGA-based acceleration for binary neural networks in edge computing

Cited by: 1
Authors
Zhan J.-Y. [1]
Yu A.-T. [1]
Jiang W. [1]
Yang Y.-J. [1]
Xie X.-N. [2]
Chang Z.-W. [3]
Yang J.-H. [4]
Affiliations
[1] School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu
[2] School of Automation, Chengdu University of Information Technology, Chengdu
[3] State Grid Sichuan Electric Power Research Institute, Chengdu
[4] Department of Information Sciences and Technology, George Mason University, Fairfax
Funding
National Natural Science Foundation of China
Keywords
Accelerator; Binarization; Field-programmable gate array (FPGA); Neural networks; Quantification
DOI
10.1016/j.jnlest.2023.100204
Abstract
As a core component of intelligent edge computing, deep neural networks (DNNs) will play an increasingly important role in addressing intelligence-related issues in industrial domains such as smart factories and autonomous driving. Because they demand large amounts of storage and computing resources, DNNs are ill suited to resource-constrained edge computing devices, especially mobile terminals with scarce energy supply. Binarization of DNNs has become a promising technology for achieving high performance with low resource consumption in edge computing. Field-programmable gate array (FPGA)-based acceleration can further improve computation efficiency by several times compared with the central processing unit (CPU) and graphics processing unit (GPU). This paper gives a brief overview of binary neural networks (BNNs) and the corresponding hardware accelerator designs for edge computing environments, and analyzes several significant studies in detail. The performance of representative methods is evaluated through experimental results, and the latest binarization technologies and hardware acceleration methods are tracked. We first give the background of BNN design and present the typical types of BNNs. The FPGA implementation technologies for BNNs are then reviewed. A detailed comparison with experimental evaluation of typical BNNs and their FPGA implementations is further conducted. Finally, several interesting directions are outlined as future work. © 2023 The Authors
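The efficiency argument behind binarization is that, once weights and activations are constrained to {-1, +1}, a multiply-accumulate reduces to an XNOR followed by a population count over bit-packed words, which maps directly onto FPGA lookup tables instead of DSP multipliers. The minimal Python sketch below is illustrative only; the helper names binarize, pack, and xnor_popcount_dot are invented for this example and do not come from the surveyed work.

    import numpy as np

    def binarize(x):
        # Sign function: map real-valued weights/activations to {-1, +1}.
        return np.where(x >= 0, 1, -1).astype(np.int8)

    def pack(signs):
        # Pack a {-1, +1} vector into an integer bit mask (+1 -> 1, -1 -> 0).
        bits = 0
        for i, s in enumerate(signs):
            if s > 0:
                bits |= 1 << i
        return bits

    def xnor_popcount_dot(a_bits, b_bits, n):
        # XNOR marks positions where the two sign vectors agree; popcount counts them.
        mask = (1 << n) - 1
        agree = (~(a_bits ^ b_bits)) & mask
        matches = bin(agree).count("1")
        # dot = (#agreements) - (#disagreements) = 2 * matches - n
        return 2 * matches - n

    rng = np.random.default_rng(0)
    x, w = rng.standard_normal(64), rng.standard_normal(64)
    xb, wb = binarize(x), binarize(w)
    assert xnor_popcount_dot(pack(xb), pack(wb), 64) == int(np.dot(xb, wb))

On an FPGA the XNOR and popcount stages become wide combinational logic, which is one reason BNN accelerators can outperform CPU and GPU baselines at a fraction of the resource cost.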
Related papers
50 records in total
  • [1] FPGA-based acceleration for binary neural networks in edge computing
    Zhan, JinYu
    Yu, AnTai
    Jiang, Wei
    Yang, YongJia
    Xie, XiaoNa
    Chang, ZhengWei
    Yang, JunHuan
    Journal of Electronic Science and Technology, 2023, 21 (02) : 67 - 79
  • [2] FPGA-Based Acceleration for Bayesian Convolutional Neural Networks
    Fan, Hongxiang
    Ferianc, Martin
    Que, Zhiqiang
    Liu, Shuanglong
    Niu, Xinyu
    Rodrigues, Miguel R. D.
    Luk, Wayne
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41 (12) : 5343 - 5356
  • [3] The Case for FPGA-Based Edge Computing
    Xu, Chenren
    Jiang, Shuang
    Luo, Guojie
    Sun, Guangyu
    An, Ning
    Huang, Gang
    Liu, Xuanzhe
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2022, 21 (07) : 2610 - 2619
  • [4] Efficient FPGA-Based Convolutional Neural Network Implementation for Edge Computing
    Cuong, Pham-Quoc
    Thinh, Tran Ngoc
    JOURNAL OF ADVANCES IN INFORMATION TECHNOLOGY, 2023, 14 (03) : 479 - 487
  • [5] Persistent Fault Analysis of Neural Networks on FPGA-based Acceleration System
    Xu, Dawen
    Zhu, Ziyang
    Liu, Cheng
    Wang, Ying
    Li, Huawei
    Zhang, Lei
    Cheng, Kwang-Ting
    2020 IEEE 31ST INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC SYSTEMS, ARCHITECTURES AND PROCESSORS (ASAP 2020), 2020, : 85 - 92
  • [6] FPGA-based Acceleration of Neural Network Training
    Sang, Ruoyu
    Liu, Qiang
    Zhang, Qijun
    2016 IEEE MTT-S INTERNATIONAL CONFERENCE ON NUMERICAL ELECTROMAGNETIC AND MULTIPHYSICS MODELING AND OPTIMIZATION (NEMO), 2016
  • [7] FPGA-Based Memristor Emulator Circuit for Binary Convolutional Neural Networks
    Tolba, Mohammed F.
    Halawani, Yasmin
    Saleh, Hani
    Mohammad, Baker
    Al-Qutayri, Mahmoud
    IEEE ACCESS, 2020, 8 : 117736 - 117745
  • [8] InSight: An FPGA-Based Neuromorphic Computing System for Deep Neural Networks
    Hong, Taeyang
    Kang, Yongshin
    Chung, Jaeyong
    JOURNAL OF LOW POWER ELECTRONICS AND APPLICATIONS, 2020, 10 (04) : 1 - 18
  • [9] Hardware Acceleration of Deep Neural Networks for Autonomous Driving on FPGA-based SoC
    Sciangula, Gerlando
    Restuccia, Francesco
    Biondi, Alessandro
    Buttazzo, Giorgio
    2022 25TH EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN (DSD), 2022, : 406 - 414
  • [10] FPGA-based Acceleration of Deep Neural Networks Using High Level Method
    Liu, Lei
    Luo, Jianlu
    Deng, Xiaoyan
    Li, Sikun
    2015 10TH INTERNATIONAL CONFERENCE ON P2P, PARALLEL, GRID, CLOUD AND INTERNET COMPUTING (3PGCIC), 2015, : 824 - 827