Weakly supervised crack segmentation using crack attention networks on concrete structures

Cited by: 4
Authors
Mishra, Anoop [1 ]
Gangisetti, Gopinath [1 ]
Azam, Yashar Eftekhar [2 ]
Khazanchi, Deepak [1 ]
Affiliations
[1] Univ Nebraska, 6001 Dodge St, Omaha, NE 68182 USA
[2] Univ New Hampshire, Durham, NH USA
Source
STRUCTURAL HEALTH MONITORING-AN INTERNATIONAL JOURNAL | 2024, Vol. 23, No. 6
Funding
U.S. National Science Foundation
Keywords
Structural health monitoring; machine learning; weakly supervised learning; image labels; crack detection; IDENTIFICATION;
DOI
10.1177/14759217241228150
Chinese Library Classification (CLC) Number
T [Industrial Technology]
Subject Classification Code
08
Abstract
Crack detection and segmentation on concrete structures is a vital process in structural health monitoring (SHM). Although supervised machine learning techniques have achieved tremendous success in this domain, data collection and annotation remain challenging: assembling representative image datasets and manually labeling training data in the SHM domain is tedious and laborious, and the literature reports significant issues with hand-annotation of image data. To address this gap, this paper proposes a two-stage weakly supervised learning framework built around a novel "crack attention network (CrANET)" with an attention mechanism to detect and segment cracks in images without any human pixel-level annotations. The framework first classifies concrete surface images as crack or no-crack and then uses gradient-weighted class activation mapping (Grad-CAM) visualization to generate crack segmentations. Professionals and domain experts subsequently evaluate these segmentation results in a human expert validation study. As the literature suggests that weakly supervised learning is still a limited practice in SHM, this research should motivate SHM researchers to develop weakly supervised learning approaches as state of the art.
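The second stage described in the abstract, deriving a segmentation mask from a classifier's class activation map, can be sketched as follows. This is a minimal toy illustration only, not the paper's implementation: the function name `grad_cam_mask`, the synthetic feature maps, and the threshold value are all assumptions. It assumes a CNN whose last convolutional layer yields K feature maps followed by global average pooling (GAP) and a linear "crack" logit, in which case the Grad-CAM channel weights reduce to the classifier weights themselves.

```python
import numpy as np

def grad_cam_mask(feature_maps, class_weights, threshold=0.5):
    """feature_maps: (K, H, W) last-layer activations.
    class_weights: (K,) weights of the 'crack' logit after GAP.
    Returns a binary (H, W) mask: the Grad-CAM heatmap thresholded."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)        # ReLU: keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize heatmap to [0, 1]
    return cam >= threshold           # binary crack mask

# Synthetic example: map 0 fires along a diagonal "crack", map 1 is
# uniform background noise; the classifier weights favor map 0.
fm = np.zeros((2, 8, 8))
np.fill_diagonal(fm[0], 1.0)          # crack-like diagonal activation
fm[1] = 0.1                           # low uniform background activation
w = np.array([1.0, -0.5])
mask = grad_cam_mask(fm, w)
print(mask.sum())                     # → 8 (the diagonal pixels)
```

In the weakly supervised setting this is the key point: the mask is obtained purely from image-level crack/no-crack training signal, with no pixel-level labels involved.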
Pages: 3748 - 3777 (30 pages)
Related Papers (50 total)
  • [1] Learning position information from attention: End-to-end weakly supervised crack segmentation with GANs
    Liu, Ye
    Chen, Jun
    Hou, Jia-ao
    COMPUTERS IN INDUSTRY, 2023, 149
  • [2] Pixel-level tunnel crack segmentation using a weakly supervised annotation approach
    Wang, Hanxiang
    Li, Yanfen
    Dang, L. Minh
    Lee, Sujin
    Moon, Hyeonjoon
    COMPUTERS IN INDUSTRY, 2021, 133
  • [3] AugMoCrack: Augmented morphological attention network for weakly supervised crack detection
    Hong, Younggi
    Lee, Sung-Jin
    Yoo, Seok Bong
    ELECTRONICS LETTERS, 2022, 58 (17) : 651 - 653
  • [4] Unified weakly and semi-supervised crack segmentation framework using limited coarse labels
    Xiang, Chao
    Gan, Vincent J. L.
    Deng, Lu
    Guo, Jingjing
    Xu, Shaopeng
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [5] Weakly-Supervised Crack Detection
    Inoue, Yuki
    Nagayoshi, Hiroto
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (11) : 12050 - 12061
  • [6] CrackCLIP: Adapting Vision-Language Models for Weakly Supervised Crack Segmentation
    Liang, Fengjiao
    Li, Qingyong
    Yu, Haomin
    Wang, Wen
    ENTROPY, 2025, 27 (02)
  • [7] Patch-based weakly supervised semantic segmentation network for crack detection
    Dong, Zhiming
    Wang, Jiajun
    Cui, Bo
    Wang, Dong
    Wang, Xiaoling
    CONSTRUCTION AND BUILDING MATERIALS, 2020, 258
  • [8] WEAKLY SUPERVISED INSTANCE SEGMENTATION USING HYBRID NETWORKS
    Liao, Shisha
    Sun, Yongqing
    Gao, Chenqiang
    Shenoy, Pranav K. P.
    Mu, Song
    Shimamura, Jun
    Sagata, Atsushi
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 1917 - 1921
  • [9] CAC: Confidence-Aware Co-Training for Weakly Supervised Crack Segmentation
    Liang, Fengjiao
    Li, Qingyong
    Li, Xiaobao
    Liu, Yang
    Wang, Wen
    ENTROPY, 2024, 26 (04)
  • [10] Automatic Crack Detection Using Weakly Supervised Semantic Segmentation Network and Mixed-Label Training Strategy
    Zhang, Shuyuan
    Xu, Hongli
    Zhu, Xiaoran
    Xie, Lipeng
    FOUNDATIONS OF COMPUTING AND DECISION SCIENCES, 2024, 49 (01) : 95 - 118