Design and Analysis of Convolutional Neural Layers: A Signal Processing Perspective

Cited by: 3
Authors
Farag, Mohammed M. M. [1,2]
Affiliations
[1] King Faisal Univ, Coll Engn, Elect Engn Dept, Al Hasa, Saudi Arabia
[2] Alexandria Univ, Fac Engn, Elect Engn Dept, Alexandria 21544, Egypt
Keywords
Computational modeling; Feature extraction; Machine learning; Task analysis; Convolutional neural networks; Mathematical models; Finite impulse response filters; Fault diagnosis; signal processing; convolutional layer; interpretable neural networks; machinery fault diagnosis; DEEP; CLASSIFICATION
DOI
10.1109/ACCESS.2023.3258399
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Convolutional layers (CLs) are ubiquitous in contemporary deep neural network (DNN) models, where they are commonly used for automatic feature extraction. A CL performs cross-correlation between the layer input and a set of learnable kernels to produce the layer output. Typically, the kernel weights are randomly initialized and learned during model training using backpropagation and gradient descent to minimize a specified loss function. Modern DNN models comprise deep hierarchical stacks of CLs and pooling layers. Despite their prevalence, CLs are often perceived as a magical feature-extraction tool, without a solid interpretation of their underlying working principles. In this work, we advance a method for designing and analyzing CLs by providing novel signal processing interpretations of the CL that exploit the layer's correlation and equivalent convolution functions. The proposed interpretations enable CLs to be employed to implement finite impulse response (FIR) filters, matched filters (MFs), the short-time Fourier transform (STFT), the discrete-time Fourier transform (DTFT), and the continuous wavelet transform (CWT). The main idea is to pre-assign the CL kernel weights so that the layer implements a specific convolution- or correlation-based DSP algorithm. Such an approach enables building self-contained DNN models in which CLs carry out various preprocessing and feature extraction tasks, enhancing model portability and reducing the preprocessing computational cost. The proposed DSP interpretations also provide an effective means of analyzing and explaining the operation of automatically trained CLs in the time and frequency domains by reversing the design procedure. The presented interpretations are mathematically established and experimentally validated with a comprehensive machinery fault diagnosis application example that illustrates the potential of the proposed methodology.
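To make the abstract's main idea concrete, the following is a minimal illustrative sketch (not code from the paper) of pre-assigning the kernel weights of a 1-D convolutional layer so that it acts as a classical FIR low-pass filter rather than a learned feature extractor. It assumes PyTorch and SciPy are available; the filter order, cutoff, and variable names are illustrative choices, not values taken from the paper.

# Sketch: a Conv1d layer with pre-assigned weights implementing an FIR filter.
# Assumes PyTorch and SciPy; numtaps and cutoff are illustrative, not from the paper.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import firwin

numtaps, cutoff = 31, 0.2                 # FIR design parameters (assumed values)
h = firwin(numtaps, cutoff)               # classical FIR low-pass design

# A CL computes cross-correlation, so the FIR kernel is time-reversed
# to obtain true convolution (the correlation/convolution equivalence used above).
conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=numtaps, bias=False)
with torch.no_grad():
    conv.weight.copy_(torch.tensor(h[::-1].copy(), dtype=torch.float32).view(1, 1, -1))
conv.weight.requires_grad_(False)         # fixed, non-learnable "DSP" layer

# Sanity check against a reference DSP implementation.
x = np.random.randn(1024).astype(np.float32)
y_ref = np.convolve(x, h, mode="valid")
y_cl = conv(torch.tensor(x).view(1, 1, -1)).squeeze().numpy()
print(np.allclose(y_ref, y_cl, atol=1e-5))   # True: the CL reproduces the FIR output

Because the kernel is fixed and frozen, the layer behaves as an embedded preprocessing stage inside the DNN, which is the portability and preprocessing-cost argument made in the abstract; the same weight pre-assignment pattern would apply to matched-filter, STFT, DTFT, or CWT kernels.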
Pages: 27641-27661
Page count: 21