Complexity of Deep Convolutional Neural Networks in Mobile Computing

Cited by: 3
Authors
Naeem, Saad [1 ]
Jamil, Noreen [1 ]
Khan, Habib Ullah [2 ]
Nazir, Shah [3 ]
Institutions
[1] Natl Univ Comp & Emerging Sci, Dept Comp Sci, Islamabad, Pakistan
[2] Qatar Univ, Coll Business & Econ, Dept Accounting & Informat Syst, Doha, Qatar
[3] Univ Swabi, Dept Comp Sci, Swabi, Pakistan
Keywords
Convolutional neural networks; Deep neural networks; Encoding (symbols); Signal encoding; Digital storage
DOI
10.1155/2020/3853780
CLC Classification
O1 [Mathematics];
Subject Classification
0701; 070101;
Abstract
Neural networks employ massive interconnections of simple computing units, called neurons, to solve problems that are highly nonlinear and cannot be hard-coded into a program. These networks are computation-intensive, and training them requires large amounts of training data; each training example demands heavy computation. We examine ways to reduce this computational burden and potentially make neural networks practical on mobile devices. In this paper, we survey techniques that can be matched and combined to improve the training time of neural networks, and we review additional recommendations for making the process work on mobile devices. Finally, we survey the deep compression technique, which addresses the problem through network pruning, quantization, and encoding of the network weights. Deep compression first prunes irrelevant connections (the pruning stage), then quantizes the remaining weights by choosing shared centroids for each layer, and finally applies Huffman encoding to reduce the storage required for the remaining weights.
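The three stages summarized in the abstract (magnitude pruning, per-layer centroid quantization, Huffman coding of the quantized indices) can be sketched end to end. This is a minimal toy illustration on a random weight matrix, not the surveyed paper's implementation; the threshold value, number of centroids, and the small k-means loop are illustrative assumptions.

```python
import heapq
from collections import Counter

import numpy as np

# Toy weight matrix standing in for one layer's weights (hypothetical values).
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, size=(8, 8))

# Stage 1: pruning -- zero out connections whose magnitude is below a threshold.
threshold = 0.5  # illustrative choice
mask = np.abs(W) >= threshold
W_pruned = W * mask

# Stage 2: quantization -- cluster the surviving weights into k shared centroids
# (a tiny Lloyd-style k-means; deep compression fits centroids per layer).
k = 4
vals = W_pruned[mask]
centroids = np.linspace(vals.min(), vals.max(), k)
for _ in range(20):
    assign = np.argmin(np.abs(vals[:, None] - centroids[None, :]), axis=1)
    for j in range(k):
        if np.any(assign == j):
            centroids[j] = vals[assign == j].mean()
W_quant = W_pruned.copy()
W_quant[mask] = centroids[assign]  # every surviving weight snaps to a centroid

# Stage 3: Huffman-code the centroid indices to shrink the stored index stream.
freq = Counter(assign.tolist())
heap = [[f, i, sym, None, None] for i, (sym, f) in enumerate(freq.items())]
heapq.heapify(heap)
nid = len(heap)
while len(heap) > 1:
    a, b = heapq.heappop(heap), heapq.heappop(heap)
    heapq.heappush(heap, [a[0] + b[0], nid, None, a, b])
    nid += 1

codes = {}
def walk(node, prefix=""):
    if node[2] is not None:          # leaf: holds a centroid index
        codes[node[2]] = prefix or "0"
    else:
        walk(node[3], prefix + "0")
        walk(node[4], prefix + "1")
walk(heap[0])

bits_huffman = sum(len(codes[s]) for s in assign.tolist())
bits_fixed = len(assign) * 2  # ceil(log2(k)) = 2 bits per index for k = 4
print(f"sparsity: {1 - mask.mean():.2f}, "
      f"huffman bits: {bits_huffman}, fixed bits: {bits_fixed}")
```

Because Huffman coding is an optimal prefix code, the encoded index stream never exceeds the fixed-width encoding; the gain grows as the centroid usage becomes more skewed.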
Pages: 8
Related Papers
(50 results in total)
  • [1] On the Reduction of Computational Complexity of Deep Convolutional Neural Networks
    Maji, Partha
    Mullins, Robert
    ENTROPY, 2018, 20 (04)
  • [2] An Efficient Implementation of Deep Convolutional Neural Networks on a Mobile Coprocessor
    Jin, Jonghoon
    Gokhale, Vinayak
    Dundar, Aysegul
    Krishnamurthy, Bharadwaj
    Martini, Berin
    Culurciello, Eugenio
    2014 IEEE 57TH INTERNATIONAL MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS (MWSCAS), 2014, : 133 - 136
  • [3] Mobile Stride Length Estimation With Deep Convolutional Neural Networks
    Hannink, Julius
    Kautz, Thomas
    Pasluosta, Cristian F.
    Barth, Jens
    Schuelein, Samuel
    Gassmann, Karl-Guenter
    Klucken, Jochen
    Eskofier, Bjoern M.
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2018, 22 (02) : 354 - 362
  • [4] Towards Acceleration of Deep Convolutional Neural Networks using Stochastic Computing
    Li, Ji
    Ren, Ao
    Li, Zhe
    Ding, Caiwen
    Yuan, Bo
    Qiu, Qinru
    Wang, Yanzhi
    2017 22ND ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC), 2017, : 115 - 120
  • [5] A New Stochastic Computing Multiplier with Application to Deep Convolutional Neural Networks
    Sim, Hyeonuk
    Lee, Jongeun
    PROCEEDINGS OF THE 2017 54TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2017,
  • [6] Reduce Computing Complexity of Deep Neural Networks Through Weight Scaling
    Tolba, Mohammed F.
    Saleh, Hani
    Al-Qutayri, Mahmoud
    Mohammad, Baker
    2022 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 22), 2022, : 1249 - 1253
  • [7] Mobile Perimeter Guard System based on Deep Convolutional Neural Networks
    Yang, Zhixiang
    Cao, Wanhua
    Hu, Jiarui
    Li, Quan
    Liu, Hao
    2022 IEEE 6TH ADVANCED INFORMATION TECHNOLOGY, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (IAEAC), 2022, : 1298 - 1304
  • [8] Energy Complexity of Convolutional Neural Networks
    Sima, Jiri
    Vidnerova, Petra
    Mrazek, Vojtech
    NEURAL COMPUTATION, 2024, 36 (08) : 1601 - 1625
  • [9] Low power & mobile hardware accelerators for deep convolutional neural networks
    Scanlan, Anthony G.
    INTEGRATION-THE VLSI JOURNAL, 2019, 65 : 110 - 127
  • [10] Deep Convolutional Neural Networks
    Gonzalez, Rafael C.
    IEEE SIGNAL PROCESSING MAGAZINE, 2018, 35 (06) : 79 - 87