Error Analysis and Improving the Accuracy of Winograd Convolution for Deep Neural Networks

Cited by: 14
Authors
Barabasz, Barbara [1 ]
Anderson, Andrew [1 ]
Soodhalter, Kirk M. [2 ]
Gregg, David [1 ]
Affiliations
[1] Trinity Coll Dublin, Sch Comp & Stat, Dublin 2, Ireland
[2] Trinity Coll Dublin, Sch Math, Dublin 2, Ireland
Funding
Science Foundation Ireland
Keywords
Floating point error; numerical analysis; Winograd algorithm; Toom-Cook algorithm; convolution; deep neural network; multiplication; stability; faithful
DOI
10.1145/3412380
CLC Classification
TP31 [Computer Software]
Discipline Code
081202; 0835
Abstract
Popular deep neural networks (DNNs) spend the majority of their execution time computing convolutions. The Winograd family of algorithms can greatly reduce the number of arithmetic operations required and is used in many DNN software frameworks. However, the performance gain comes at the expense of reduced floating point (FP) numerical accuracy. In this article, we analyse the worst-case FP error and derive an estimation of the norm and conditioning of the algorithm. We show that the error bound grows exponentially with the size of the convolution. Further, the error bound of the modified algorithm is slightly lower but still exponential. We propose several methods for reducing FP error. We propose a canonical evaluation ordering based on Huffman coding that reduces summation error. We study the selection of sampling "points" experimentally, find empirically good points for the most important sizes, and identify the main factors associated with good points. In addition, we explore other methods to reduce FP error, including mixed-precision convolution and pairwise summation across DNN channels. Using our methods, we can significantly reduce FP error for a given block size, which allows larger block sizes to be used, reducing computation.
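To make the Winograd family the abstract refers to concrete, the sketch below works through its smallest member, F(2,3), which produces 2 outputs of a 3-tap 1D convolution using 4 multiplications instead of the 6 a direct computation needs. This is a minimal illustration assuming the standard transform matrices for interpolation points {0, 1, -1, ∞}; the function name `winograd_f23` is chosen here and nothing below is taken from the paper's own implementation.

```python
import numpy as np

# Winograd F(2,3): 2 outputs of a 3-tap 1D convolution from a 4-sample
# input tile, using 4 elementwise multiplications (direct method: 6).
# Standard transforms for interpolation points {0, 1, -1, inf}; the 1/2
# factors in G are one source of the FP rounding error the paper analyses.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float64)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float64)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float64)

def winograd_f23(d, g):
    """Correlate a 4-sample input tile d with a 3-tap filter g -> 2 outputs."""
    U = G @ g    # filter transform (3 -> 4)
    V = BT @ d   # input transform  (4 -> 4)
    M = U * V    # the 4 elementwise multiplications
    return AT @ M  # output transform (4 -> 2)

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 0.25, -1.0])
# Direct (sliding-window) computation for comparison.
ref = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(np.allclose(winograd_f23(d, g), ref))  # True
```

For F(2,3) the two results agree to near machine precision; the paper's point is that for larger members of the family (bigger tiles, more interpolation points) the transform matrices become ill-conditioned and the worst-case error grows exponentially.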
Pages: 33
Related Papers (50 total)
  • [1] Winograd Convolution for Deep Neural Networks: Efficient Point Selection
    Alam, Syed Asad
    Anderson, Andrew
    Barabasz, Barbara
    Gregg, David
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2022, 21 (06)
  • [2] Vectorized Winograd's algorithm for Convolution Neural networks
    Zhao, Yuekai
    Lu, Jianzhuang
    Chen, Xiaowen
    19TH IEEE INTERNATIONAL SYMPOSIUM ON PARALLEL AND DISTRIBUTED PROCESSING WITH APPLICATIONS (ISPA/BDCLOUD/SOCIALCOM/SUSTAINCOM 2021), 2021, : 715 - 722
  • [3] Winograd Algorithm for 3D Convolution Neural Networks
    Wang, Zelong
    Lan, Qiang
    He, Hongjun
    Zhang, Chunyuan
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, PT II, 2017, 10614 : 609 - 616
  • [4] Improving the quality of underwater imaging using deep convolution neural networks
    Nagaraj V. Dharwadkar
    Anjali M. Yadav
    Mohammad Ali Kadampur
    Iran Journal of Computer Science, 2022, 5 (2) : 127 - 141
  • [5] Deep green function convolution for improving saliency in convolutional neural networks
    Dominique Beaini
    Sofiane Achiche
    Alexandre Duperré
    Maxime Raison
    The Visual Computer, 2021, 37 : 227 - 244
  • [6] Deep green function convolution for improving saliency in convolutional neural networks
    Beaini, Dominique
    Achiche, Sofiane
    Duperré, Alexandre
    Raison, Maxime
    VISUAL COMPUTER, 2021, 37 (02): : 227 - 244
  • [7] Deep Convolution Neural Networks for Twitter Sentiment Analysis
    Zhao Jianqiang
    Gui Xiaolin
    Zhang Xuejun
    IEEE ACCESS, 2018, 6 : 23253 - 23260
  • [8] Sentiment Analysis of Text using Deep Convolution Neural Networks
    Chachra, Anmol
    Mehndiratta, Pulkit
    Gupta, Mohit
    2017 TENTH INTERNATIONAL CONFERENCE ON CONTEMPORARY COMPUTING (IC3), 2017, : 247 - 252
  • [9] Full error analysis for the training of deep neural networks
    Beck, Christian
    Jentzen, Arnulf
    Kuckuck, Benno
    INFINITE DIMENSIONAL ANALYSIS QUANTUM PROBABILITY AND RELATED TOPICS, 2022, 25 (02)
  • [10] An Architecture to Accelerate Convolution in Deep Neural Networks
    Ardakani, Arash
    Condo, Carlo
    Ahmadi, Mehdi
    Gross, Warren J.
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2018, 65 (04) : 1349 - 1362