Bit-Beading: Stringing bit-level MAC results for Accelerating Neural Networks

Cited by: 0
Authors
Anwar, Zeeshan [1 ]
Longchar, Imlijungla [1 ]
Kapoor, Hemangee K. [1 ]
Affiliations
[1] IIT Guwahati, Dept Comp Sci & Engn, Gauhati, India
Keywords
MAC Unit; Reconfigurable Arithmetic; Booth's algorithm; CNN; DNN; Neural Network; Low Precision;
DOI
10.1109/VLSID60093.2024.00042
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
On account of the rising demands of AI applications and the consequent need for improvement, researchers are designing better and faster algorithms and architectures. Convolutional Neural Networks (CNNs) have become ubiquitous and find wide application in computer vision. CNN inference is dominated by the convolution operation, which mainly consists of a massive number of matrix multiplications; optimising these multiplications enables faster execution of inference tasks. A fixed-precision datapath takes the same time to compute regardless of whether the operands are high or low precision, yet it is noted in the literature that lowering the precision to some extent does not affect inference accuracy. In this paper, we propose a reconfigurable multiplier that can handle operands of different precisions. We design the Bit-Bead, a basic unit based on Booth's algorithm; several bit-beads are composed (i.e., strung together) to form a multiplier of the required precision. The reconfigurable multiplier achieves lower latency at reduced precision and can also perform multiple low-precision computations at once. Our proposal shows considerable performance improvement over the baseline and existing designs.
Pages: 216-221
Page count: 6
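The abstract's core idea, composing small Booth-based units into a multiplier of the required width, can be sketched in software. The code below is an illustrative assumption, not the paper's hardware design: `booth_radix4` multiplies by radix-4 Booth recoding (the algorithm the Bit-Bead is based on), and `composed_multiply` shows the schoolbook identity by which four narrow products combine into one wide product, analogous to stringing beads; both function names are hypothetical.

```python
def booth_radix4(a: int, b: int, bits: int = 8) -> int:
    """Multiply a by a signed `bits`-wide b using radix-4 Booth recoding.

    Each overlapping 3-bit group of b's two's-complement pattern yields a
    digit in {-2, -1, 0, 1, 2}; the partial product a*digit is weighted by
    4**k and accumulated (illustrative sketch, not the paper's circuit).
    """
    digit_of = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
                0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    m = (b & ((1 << bits) - 1)) << 1  # b's bit pattern, implicit 0 below LSB
    acc = 0
    for i in range(0, bits, 2):       # one recoded digit per bit pair
        acc += a * digit_of[(m >> i) & 0b111] << i
    return acc


def composed_multiply(a: int, b: int, bead: int = 8) -> int:
    """Compose four `bead`-bit unsigned products into one 2*bead-bit product.

    Only the arithmetic identity behind 'stringing' narrow multipliers
    into a wider one; the paper realises this in hardware.
    """
    mask = (1 << bead) - 1
    a_lo, a_hi = a & mask, a >> bead
    b_lo, b_hi = b & mask, b >> bead
    return (((a_hi * b_hi) << (2 * bead))
            + ((a_hi * b_lo + a_lo * b_hi) << bead)
            + a_lo * b_lo)
```

For example, `composed_multiply(300, 500)` performs four 8-bit multiplications yet returns the full 16-bit-operand product, which is why a reconfigurable design can trade one wide multiply for several independent narrow ones.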