Visualization Analysis and Kernel Pruning of Convolutional Neural Network for Ship-Radiated Noise Classification

Cited by: 0
Authors
Xu Y. [1 ]
Cai Z. [1 ]
Kong X. [1 ]
Huang Y. [1 ]
Affiliations
[1] College of Electronic Engineering, Naval University of Engineering, Wuhan
Keywords
Convolutional Neural Network(CNN); Guided backward propagation; Neural network pruning; Ship-radiated noise classification; Visualization analysis;
DOI
10.11999/JEIT230149
Abstract
Current research on ship-radiated noise classification with deep neural networks focuses primarily on classification performance and disregards model interpretation. To address this issue, a Convolutional Neural Network (CNN) for ship-radiated noise classification is built on the DeepShip dataset, taking a logarithmic-scale spectrum as input, and guided backward propagation together with input-space optimization is applied to it, yielding a visualization method for ship-radiated noise classification. Results reveal that a multiframe feature-alignment algorithm enhances the visualization effect, and that the deep convolutional kernels detect two types of features: line spectrum and background. Notably, the line spectrum is identified as a reliable feature for ship classification. A convolutional-kernel pruning method is therefore proposed; it improves not only CNN classification performance but also the stability of the training process. Guided backward propagation visualization further suggests that the post-pruning CNN places greater emphasis on line-spectrum information. © 2024 Journal of Pattern Recognition and Artificial Intelligence. All rights reserved.
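The two operations named in the abstract can be illustrated with a minimal NumPy sketch. Note the assumptions: guided backward propagation is shown only at a single ReLU (the rule from Springenberg et al., ref. [9]: pass gradient only where both the forward input and the incoming gradient are positive), and the pruning criterion here is a generic L1-magnitude stand-in, since the paper selects kernels via visualization analysis rather than weight magnitude. All function names (`guided_relu_backward`, `prune_kernels`) are illustrative, not from the paper.

```python
import numpy as np

def guided_relu_backward(grad_out, x):
    """Guided-backprop gradient rule for a ReLU unit.

    Standard backprop keeps the gradient where the forward input x was
    positive; guided backprop additionally zeroes negative incoming
    gradients, so only evidence supporting the activation flows back.
    """
    return grad_out * (x > 0) * (grad_out > 0)

def prune_kernels(weights, keep_ratio=0.5):
    """Keep the top fraction of output-channel kernels by L1 norm.

    weights: array of shape (out_channels, in_channels, kH, kW).
    Stand-in criterion only; the paper instead drops kernels that the
    visualization identifies as responding to background rather than
    line-spectrum structure.
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(len(norms) * keep_ratio)))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep]

x = np.array([-1.0, 2.0, 3.0])
g = np.array([0.5, -0.5, 1.0])
print(guided_relu_backward(g, x))  # [0. 0. 1.]
```

Position 0 is masked because the forward input was negative, position 1 because the incoming gradient was negative; only position 2 survives both masks.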
Pages: 74-82
Page count: 8
References (19 items)
  • [1] SHEN Sheng, YANG Honghui, LI Junhao, et al., Auditory inspired convolutional neural networks for ship type classification with raw hydrophone data, Entropy, 20, 12, (2018)
  • [2] HU Gang, WANG Kejun, PENG Yuan, et al., Deep learning methods for underwater target feature extraction and recognition, Computational Intelligence and Neuroscience, 2018, (2018)
  • [3] LI Junhao, YANG Honghui, The underwater acoustic target timbre perception and recognition based on the auditory inspired deep convolutional neural network, Applied Acoustics, 182, (2021)
  • [4] CHEN Yuechao, SHANG Jintao, Underwater target recognition method based on convolution autoencoder, Proceedings of 2019 IEEE International Conference on Signal, Information and Data Processing, pp. 1-5, (2019)
  • [5] CHEN Jie, HAN Bing, MA Xufeng, et al., Underwater target recognition based on multi-decision LOFAR spectrum enhancement: A deep-learning approach, Future Internet, 13, 10, (2021)
  • [6] ZHANG Qi, DA Lianglong, ZHANG Yanhou, et al., Integrated neural networks based on feature fusion for underwater target recognition, Applied Acoustics, 182, (2021)
  • [7] GOODFELLOW I, BENGIO Y, COURVILLE A, Deep Learning, translated by ZHAO Shenjian, LI Yujun, FU Tianfan, et al., pp. 224-225, (2017)
  • [8] ZEILER M D, FERGUS R, Visualizing and understanding convolutional networks, Proceedings of the 13th European Conference on Computer Vision, pp. 818-833, (2014)
  • [9] SPRINGENBERG J T, DOSOVITSKIY A, BROX T, et al., Striving for simplicity: The all convolutional net, (2015)
  • [10] SIMONYAN K, VEDALDI A, ZISSERMAN A, Deep inside convolutional networks: Visualising image classification models and saliency maps, (2014)