Binary convolutional neural network: Review

Cited by: 0
Authors
Ding W. [1 ]
Liu C. [2 ]
Li Y. [2 ]
Zhang B. [3 ]
Affiliations
[1] Unmanned System Research Institute, Beihang University, Beijing
[2] School of Electronic and Information Engineering, Beihang University, Beijing
[3] School of Automation Science and Electrical Engineering, Beihang University, Beijing
Funding
National Natural Science Foundation of China
Keywords
Binarization; Binary convolutional neural networks; Deep learning; Full-precision convolutional neural networks; Lightweight; Model compression; Quantization;
DOI
10.7527/S1000-6893.2020.24504
Abstract
In recent years, Binary Convolutional Neural Networks (BNNs) have attracted much attention owing to their low storage requirements and high computational efficiency. However, the mismatch between forward and backward quantization results in a large performance gap between BNNs and full-precision convolutional neural networks, hindering the deployment of BNNs on resource-constrained platforms. Researchers have proposed a series of algorithms and training methods to narrow this performance gap during binarization, thereby promoting the application of BNNs to embedded and portable devices. This paper presents a comprehensive review of BNNs, mainly from the perspectives of improving the network's representational capability and fully exploiting its training potential. Specifically, improving representational capability covers binary quantization method design and network structure design, while fully exploiting training potential covers loss function design and training strategy. Finally, we discuss the performance of BNNs on different tasks and hardware platforms, and summarize the challenges for future research. © 2021, Beihang University Aerospace Knowledge Press. All rights reserved.
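The forward/backward quantization mismatch mentioned in the abstract can be illustrated with a minimal NumPy sketch (an assumption of this record, not part of the reviewed paper): the forward pass binarizes weights with the sign function, whose gradient is zero almost everywhere, so training typically substitutes a straight-through estimator (STE) in the backward pass.

```python
import numpy as np

def binarize_forward(w):
    # Forward pass: quantize full-precision weights to {-1, +1} via sign.
    return np.where(w >= 0, 1.0, -1.0)

def ste_backward(w, grad_out, clip=1.0):
    # Backward pass: sign() has zero gradient almost everywhere, so the
    # straight-through estimator passes the incoming gradient through,
    # masked to the region |w| <= clip. This surrogate gradient is what
    # creates the forward/backward mismatch the review discusses.
    return grad_out * (np.abs(w) <= clip)

w = np.array([-1.7, -0.3, 0.0, 0.4, 2.1])
print(binarize_forward(w))                # [-1. -1.  1.  1.  1.]
print(ste_backward(w, np.ones_like(w)))   # [0. 1. 1. 1. 0.]
```

Because the backward function approximates a different operation than the forward one actually computes, the accumulated gradient error is one source of the accuracy gap between BNNs and their full-precision counterparts.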