Deep reference autoencoder convolutional neural network for damage identification in parallel steel wire cables

Cited by: 3
Authors
Xue, Songling [1 ,2 ]
Sun, Yidan [1 ]
Su, Teng [1 ]
Zhao, Xiaoqing [1 ]
Affiliations
[1] Jiangsu Ocean Univ, Sch Civil & Ocean Engn, Lianyungang 222005, Peoples R China
[2] Southwest Jiaotong Univ, Sch Civil Engn, Chengdu 610031, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Unsupervised deep learning; Damage identification; DRACNN; Parallel steel wire cable; Localization
DOI
10.1016/j.istruc.2023.105316
CLC number
TU [Building Science]
Discipline code
0813
Abstract
This paper addresses two challenges in practical engineering: the limited availability of training samples representing the damage states of parallel steel wire cables, and the difficulty of detecting minor damage. To tackle these issues, we propose an unsupervised deep learning damage identification technique, the Deep Reference Autoencoder Convolutional Neural Network (DRACNN), for analyzing the damage state of parallel steel wire cables in bridge engineering. The DRACNN method uses multi-dimensional cross-correlation functions (CCFs) derived from acceleration signals at various health stages as input to train the network structure and obtain optimal parameters. We then perform a layer-wise decomposition to identify the neurons in the lowest hidden layer that indicate damage. The change information of these neurons is extracted with an Exponentially Weighted Moving Average (EWMA) control chart to determine the damage state of the structure. Finally, we present a comprehensive numerical analysis describing the method's workflow and network architecture and demonstrate the feasibility of the approach through experiments.
Pages: 10
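
The processing pipeline summarized in the abstract (cross-correlation features computed from acceleration signals, a damage-sensitive index taken from the autoencoder's lowest hidden layer, and an EWMA control chart) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the reference channel, lag window, EWMA weight, and control limit are assumptions, and the latent damage index is simulated here rather than taken from a trained DRACNN.

import numpy as np

def cross_correlation_features(acc, ref_channel=0, max_lag=64):
    """Multi-dimensional cross-correlation functions (CCFs) between each
    acceleration channel and a reference channel, truncated to +/- max_lag samples.
    Channel choice and lag window are illustrative assumptions."""
    n_channels, n_samples = acc.shape
    ref = acc[ref_channel] - acc[ref_channel].mean()
    feats = []
    for ch in range(n_channels):
        sig = acc[ch] - acc[ch].mean()
        full = np.correlate(sig, ref, mode="full") / n_samples
        mid = n_samples - 1                      # zero-lag index of the full correlation
        feats.append(full[mid - max_lag: mid + max_lag + 1])
    return np.stack(feats)                       # shape: (n_channels, 2*max_lag + 1)

def ewma(x, lam=0.2):
    """Exponentially Weighted Moving Average of a 1-D damage-sensitive feature."""
    z = np.empty_like(x, dtype=float)
    z[0] = x[0]
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acc = rng.standard_normal((4, 4096))         # 4 simulated acceleration channels
    ccf = cross_correlation_features(acc)        # would serve as autoencoder input
    # In the paper, a damage index per monitoring window comes from the lowest
    # hidden layer of the trained network; here it is simulated for the chart.
    damage_index = rng.standard_normal(200)
    chart = ewma(damage_index)
    ucl = chart[:50].mean() + 3 * chart[:50].std()   # baseline control limit (assumed)
    print("windows exceeding UCL:", int((chart > ucl).sum()))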