Pluggable Micronetwork for Layer Configuration Relay in a Dynamic Deep Neural Surface

Cited by: 1
Authors
Khan, Farhat Ullah [1 ]
Aziz, Izzatdin B. [1 ]
Akhir, Emilia Akashah P. [1 ]
Affiliations
[1] Univ Teknol Petronas, Ctr Res Data Sci CERDAS, Seri Iskander 31750, Perak, Malaysia
Source
IEEE ACCESS | 2021, Vol. 9
Keywords
Convolution; Training; Logic gates; Feature extraction; Computational modeling; Adaptive systems; Relays; Convolution neural network; deep learning; dynamic neural structure; micronetwork; multilayer perceptron
DOI
10.1109/ACCESS.2021.3110709
CLC Number
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
The classical convolutional neural network architecture adheres to static declaration procedures: the shape of the computation is predefined and the computation graph is fixed. This research proposes the concept of a pluggable micronetwork, which relaxes the static declaration constraint through dynamic layer configuration relay. The micronetwork consists of several parallel convolutional layer configurations and relays only the layer settings that incur the minimum loss. The configuration selection logic is based on the conditional computation method and is implemented as the output layer of the proposed micronetwork. The micronetwork is implemented as an independent pluggable unit and can be used anywhere on the deep learning decision surface with no or minimal configuration changes. The MNIST, FMNIST, CIFAR-10, and STL-10 datasets are used to validate the proposed research. The proposed technique proves efficient, achieving state-of-the-art performance in fewer iterations with wider and more compact convolution models. We also make a preliminary attempt to discuss the computational complexities involved in these advanced deep neural structures.
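The abstract describes the mechanism but not an implementation. Below is a minimal PyTorch sketch of the general idea: several parallel convolutional layer configurations feed a conditional-computation output layer that decides which configuration to relay downstream. The class name PluggableMicronetwork, the choice of branch kernel sizes, and the learned softmax gate (standing in here for the paper's minimum-loss selection rule) are all illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PluggableMicronetwork(nn.Module):
    """Parallel conv layer configurations; a gate relays one of them."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Candidate layer configurations: identical output shape,
        # different kernel sizes (hypothetical settings).
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        # Conditional-computation "output layer": scores each
        # configuration from a global summary of the input.
        self.gate = nn.Linear(in_channels, len(self.branches))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global average pool -> per-configuration logits.
        summary = x.mean(dim=(2, 3))               # (N, C_in)
        weights = F.softmax(self.gate(summary), 1) # soft selection while training
        outs = torch.stack([b(x) for b in self.branches], dim=1)  # (N, B, C, H, W)
        # Relay: weighted sum over branches; at inference a hard argmax
        # would relay a single configuration, as the abstract suggests.
        return (weights[:, :, None, None, None] * outs).sum(dim=1)

if __name__ == "__main__":
    # The unit is pluggable: drop it in wherever a conv layer would go.
    block = PluggableMicronetwork(in_channels=3, out_channels=16)
    y = block(torch.randn(2, 3, 32, 32))
    print(y.shape)  # torch.Size([2, 16, 32, 32])
```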
Pages: 124831-124846
Number of pages: 16
Related Papers
50 items in total
  • [41] A Statistical Approach for the Best Deep Neural Network Configuration for Arabic Language Processing
    Saadi, Abdelhalim
    Belhadef, Hacene
    MODELLING AND IMPLEMENTATION OF COMPLEX SYSTEMS, 2019, 64 : 204 - 218
  • [42] Wide Hidden Expansion Layer for Deep Convolutional Neural Networks
    Wang, Min
    Liu, Baoyuan
    Foroosh, Hassan
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 923 - 931
  • [43] Intrusion Detection Using Deep Neural Network with AntiRectifier Layer
    Lohiya, Ritika
    Thakkar, Ankit
    APPLIED SOFT COMPUTING AND COMMUNICATION NETWORKS, 2021, 187 : 89 - 105
  • [44] Design and Implementation of an Approximate Softmax Layer for Deep Neural Networks
    Gao, Yue
    Liu, Weiqiang
    Lombardi, Fabrizio
2020 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2020
  • [45] Efficient Hardware Architecture of Softmax Layer in Deep Neural Network
    Hu, Ruofei
    Tian, Binren
    Yin, Shouyi
    Wei, Shaojun
2018 IEEE 23RD INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP), 2018
  • [46] Stochastic Layer-Wise Precision in Deep Neural Networks
    Lacey, Griffin
    Taylor, Graham W.
    Areibi, Shawki
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2018, : 663 - 672
  • [47] Efficient Hardware Architecture of Softmax Layer in Deep Neural Network
    Yuan, Bo
    2016 29TH IEEE INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (SOCC), 2016, : 323 - 326
  • [48] IDENTIFYING SPACECRAFT CONFIGURATION USING DEEP NEURAL NETWORKS FOR PRECISE ORBIT ESTIMATION
    Tiwari, Madhur
    Zuehlke, David
    Henderson, Troy A.
    SPACEFLIGHT MECHANICS 2019, VOL 168, PTS I-IV, 2019, 168 : 127 - 137
  • [49] Layer-Parallel Training of Deep Residual Neural Networks
    Guenther, Stefanie
    Ruthotto, Lars
    Schroder, Jacob B.
    Cyr, Eric C.
    Gauger, Nicolas R.
    SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE, 2020, 2 (01): : 1 - 23
  • [50] Privacy preserving layer partitioning for Deep Neural Network models
    Rajasekar, Kishore
    Loh, Randolph
    Fok, Kar Wai
    Thing, Vrizlynn L. L.
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 1129 - 1135