EMPIRICAL ANALYSIS OF IEEE754, FIXED-POINT AND POSIT IN LOW PRECISION MACHINE LEARNING

Cited by: 0
Authors
Ciocirlan, Stefan-Dan [1 ]
Neacsu, Teodor-Andrei [1 ]
Rughinis, Razvan-Victor [1 ]
Affiliations
[1] Univ Politehn Bucuresti, Dept Comp Sci, Bucharest, Romania
Keywords
Number representation systems; IEEE754; Machine Learning; Knowledge Distillation; Posit
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Subject Classification
0808; 0809
Abstract
Deep neural networks have advanced the state of the art in applications such as object classification, image segmentation and natural language processing. To increase their accuracy, they have become more complex and more costly in terms of storage, computation time and energy consumption. This paper addresses the storage problem and presents the advantages of using alternative number representations, such as fixed-point and posit numbers, for deep neural network inference. The networks were trained with the proposed framework Low Precision Machine Learning (LPML) using 32-bit IEEE754. Storage was first optimized through knowledge distillation and then by modifying, layer by layer, the number representation together with the precision. The first significant results were obtained by changing the number representation of the network while keeping the same precision per layer. For a 2-layer network (2LayerNet), 16-bit posit achieves 93.45% accuracy, close to the 93.47% obtained with 32-bit IEEE754. Using 8-bit posit decreases the accuracy by 1.29% but reduces the storage space by 75%. Fixed-point representation showed little tolerance to reducing the number of bits used for the fractional part: a 4-4 bit fixed point (4 bits for the integer part and 4 bits for the fractional part) reduces storage by 75% but lowers accuracy to 67.21%, whereas with at least 8 fractional bits the results are similar to 32-bit IEEE754. To increase accuracy before reducing precision, knowledge distillation was used: a ResNet18 network gained 0.87% in accuracy by using a ResNet34 as a professor (teacher) network. By changing the number representation system and precision per layer, storage was reduced by 43.47% while accuracy decreased by only 0.26%. In conclusion, by combining knowledge distillation with per-layer changes of number representation and precision, the ResNet18 network required 66.75% less storage than the ResNet34 professor network while losing only 1.38% in accuracy.
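The abstract's fixed-point results depend on how a value is split into integer and fractional bits. The following minimal Python sketch (not part of the paper's LPML framework) shows one common signed fixed-point quantization scheme; counting the sign bit inside the 4 integer bits is an assumption, and the paper may define the format differently.

```python
import numpy as np

def to_fixed_point(x, int_bits=4, frac_bits=4):
    """Quantize values to a signed fixed-point grid with int_bits for the
    integer part (sign bit included, an assumption) and frac_bits for the
    fractional part -- the "4-4 bit" format mentioned in the abstract."""
    scale = 2.0 ** frac_bits                        # smallest representable step is 2**-frac_bits
    max_val = 2.0 ** (int_bits - 1) - 1.0 / scale   # largest representable value
    min_val = -(2.0 ** (int_bits - 1))              # most negative representable value
    return np.clip(np.round(x * scale) / scale, min_val, max_val)

weights = np.array([0.1234, -1.7, 3.9, 12.5], dtype=np.float32)
print(to_fixed_point(weights, 4, 4))   # [ 0.125  -1.6875  3.875   7.9375]
# Storage: 8 bits per weight instead of 32, i.e. the 75% reduction quoted above.
```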
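The abstract does not spell out the distillation objective used to lift the ResNet18 student before precision is reduced, so the sketch below uses the standard temperature-scaled knowledge-distillation loss as a plausible stand-in; the temperature T and mixing weight alpha are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Temperature-scaled distillation loss: soft targets from the larger network
    (the "professor" in the paper's terminology) plus the usual hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale after temperature softening
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage sketch: logits from a ResNet34 professor guide a ResNet18 student.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```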
Source: UPB Scientific Bulletin, Series C: Electrical Engineering and Computer Science, 2023, 85(03)
Pages: 13-24 (12 pages)