Optimal Architecture of Floating-Point Arithmetic for Neural Network Training Processors

Cited by: 11
Authors:
Junaid, Muhammad [1 ]
Arslan, Saad [2 ]
Lee, TaeGeon [1 ]
Kim, HyungWon [1 ]
Affiliations:
[1] Chungbuk Natl Univ, Coll Elect & Comp Engn, Dept Elect, Cheongju 28644, South Korea
[2] COMSATS Univ Islamabad, Dept Elect & Comp Engn, Pk Rd, Islamabad 45550, Pakistan
Funding:
National Research Foundation, Singapore
Keywords:
floating-point; IEEE 754; convolutional neural network (CNN); MNIST dataset; accelerator
DOI:
10.3390/s22031230
CLC number: O65 [Analytical Chemistry]
Subject classification: 070302; 081704
Abstract:
The convergence of artificial intelligence (AI) is one of the critical technologies of the recent fourth industrial revolution. The Artificial Intelligence of Things (AIoT) is expected to be a solution that aids rapid and secure data processing. While the success of AIoT demands low-power neural network processors, most recent research has focused on accelerator designs for inference only. The growing interest in self-supervised and semi-supervised learning now calls for processors that offload the training process in addition to inference. Training with high accuracy goals requires floating-point operators, but higher-precision floating-point arithmetic architectures in neural networks tend to consume large area and energy; consequently, an energy-efficient, compact accelerator is required. The proposed architecture incorporates training in 32-bit, 24-bit, 16-bit, and mixed precisions to find the optimal floating-point format for low-power, small-footprint edge devices. The proposed accelerator engines have been verified on an FPGA for both inference and training on the MNIST image dataset. The combination of a 24-bit custom FP format with 16-bit Brain FP achieves an accuracy of more than 93%. An ASIC implementation of this optimized mixed-precision accelerator in a TSMC 65 nm process has an active area of 1.036 × 1.036 mm² and an energy consumption of 4.445 µJ per training of one image. Compared with the 32-bit architecture, the size and energy are reduced by factors of 4.7 and 3.91, respectively. Therefore, a CNN structure using floating-point numbers with an optimized datapath will contribute significantly to the AIoT field, which requires small area, low energy, and high accuracy.
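As a concrete illustration of the mixed-precision idea, below is a minimal C sketch of the two reduced-precision conversions the abstract combines. The abstract does not spell out the 24-bit custom FP layout, so the sketch assumes it keeps float32's 1-bit sign and 8-bit exponent and truncates the mantissa to 15 bits; Brain FP (bfloat16) is the standard 1/8/7 truncation of float32. The function names f32_to_bf16 and f32_to_fp24 are illustrative, not from the paper.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reinterpret a float's bits as a 32-bit integer. */
static uint32_t f32_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

/* bfloat16 (1 sign / 8 exponent / 7 mantissa): keep the top 16 bits of
 * float32, rounding to nearest even at the cut. NaN handling omitted. */
static uint16_t f32_to_bf16(float f) {
    uint32_t u = f32_bits(f);
    uint32_t round = 0x7FFFu + ((u >> 16) & 1u); /* tie-to-even bias */
    return (uint16_t)((u + round) >> 16);
}

/* Hypothetical 24-bit custom FP (assumed 1 sign / 8 exponent / 15
 * mantissa): keep the top 24 bits of float32, rounding to nearest even. */
static uint32_t f32_to_fp24(float f) {
    uint32_t u = f32_bits(f);
    uint32_t round = 0x7Fu + ((u >> 8) & 1u); /* tie-to-even bias */
    return (u + round) >> 8; /* 24 significant bits remain */
}

int main(void) {
    float x = 3.14159265f;
    printf("f32 bits : 0x%08" PRIX32 "\n", f32_bits(x));
    printf("bf16 bits: 0x%04X\n", (unsigned)f32_to_bf16(x));
    printf("fp24 bits: 0x%06" PRIX32 "\n", f32_to_fp24(x));
    return 0;
}

Both conversions keep the full 8-bit exponent, preserving float32's dynamic range; for training, range typically matters more than mantissa precision, which is consistent with the abstract's finding that a 24-bit/Brain-FP mix retains over 93% accuracy at a fraction of the 32-bit area and energy.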
Pages: 16