GDALR: Global Dual Attention and Local Representations in transformer for surface defect detection

Cited by: 9
Authors
Zhou, Xin [1 ]
Zhou, Shihua [1 ]
Zhang, Yongchao [1 ]
Ren, Zhaohui [1 ]
Jiang, Zeyu [1 ]
Luo, Hengfa [1 ]
Affiliations
[1] Northeastern Univ, Sch Mech Engn & Automat, Wenhua Rd, Shenyang 110819, Liaoning, Peoples R China
Keywords
Surface defect detection; Semantic segmentation; Vision transformer; Dual-attention; Local transformer;
DOI
10.1016/j.measurement.2024.114398
Chinese Library Classification
T [Industrial Technology];
Discipline Code
08;
Abstract
Automated surface defect detection has gradually emerged as a promising and crucial inspection method in the industrial sector, greatly enhancing production quality and efficiency. However, current semantic segmentation models based on Vision Transformers are primarily trained on natural images, which exhibit complex object textures and backgrounds. Additionally, pure Vision Transformers lack the ability to capture local representations, making it challenging to apply existing semantic segmentation models directly to industrial production scenarios. In this paper, we propose a novel transformer segmentation model specifically designed for surface defect detection in industrial settings. First, we employ a Dual-Attention Transformer (DAT) as the backbone of our model. This backbone replaces the generic 2D convolution block in the Spatial Reduction Attention (SRA) module with a new self-attention block, enabling the establishment of a global view at each layer. Second, we enhance the collection of local information during decoding by initializing the relative position between query and key pixels. Finally, to strengthen the salient defect structure, we utilize Pixel Shuffle to rearrange the Ground Truth (GT) so that it guides the feature maps at each scale. Extensive experiments are conducted on three publicly available industrial datasets, and the evaluation results demonstrate the outstanding performance of our network in surface defect detection.
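The final step of the abstract (rearranging the GT with Pixel Shuffle so it can guide the feature maps at each scale) admits a compact illustration. The sketch below is a minimal PyTorch interpretation under assumptions not stated in this record: a backbone producing features at strides 4/8/16/32 with PVT-like channel widths, 1x1 projection heads, and a binary cross-entropy auxiliary loss. The function names (rearrange_gt, multiscale_gt_guidance_loss), the channel widths, and the loss choice are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def rearrange_gt(gt: torch.Tensor, downscale: int) -> torch.Tensor:
    """Pixel-unshuffle a full-resolution mask (B, 1, H, W) into
    (B, downscale**2, H/downscale, W/downscale), so no pixel labels are
    discarded when matching a lower-resolution feature map."""
    return F.pixel_unshuffle(gt, downscale)


def multiscale_gt_guidance_loss(features, gt, heads, strides=(4, 8, 16, 32)):
    """Auxiliary loss that lets the rearranged GT guide each decoder scale.

    features: list of feature maps at the given strides (assumed backbone layout)
    gt:       full-resolution binary mask, shape (B, 1, H, W)
    heads:    1x1 convolutions projecting each feature map to stride**2 channels
    """
    loss = 0.0
    for feat, head, s in zip(features, heads, strides):
        target = rearrange_gt(gt.float(), s)   # (B, s*s, H/s, W/s)
        pred = head(feat)                      # same shape as target
        loss = loss + F.binary_cross_entropy_with_logits(pred, target)
    return loss


if __name__ == "__main__":
    # Shape-only usage example with random tensors standing in for backbone features.
    B, H, W = 2, 256, 256
    channels, strides = (64, 128, 320, 512), (4, 8, 16, 32)
    gt = (torch.rand(B, 1, H, W) > 0.5).float()
    features = [torch.randn(B, c, H // s, W // s) for c, s in zip(channels, strides)]
    heads = nn.ModuleList(nn.Conv2d(c, s * s, kernel_size=1) for c, s in zip(channels, strides))
    print(multiscale_gt_guidance_loss(features, gt, heads))
```

Pixel unshuffle is chosen here because it downsamples the mask without dropping any pixel labels, which is the property the abstract appeals to when it says the rearranged GT strengthens the salient defect structure at every scale.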
Pages: 10
Related Papers
50 records in total
  • [31] A hierarchical attention detector for bearing surface defect detection
    Ma, Jiajun
    Hu, Songyu
    Fu, Jianzhong
    Chen, Gui
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 239
  • [32] Sparse cross-transformer network for surface defect detection
    Huang, Xiaohua
    Li, Yang
    Bao, Yongqiang
    Zhu, Xiaochun
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [33] Adaptive Cross Transformer With Contrastive Learning for Surface Defect Detection
    Huang, Xiaohua
    Li, Yang
    Bao, Yongqiang
    Zheng, Wenming
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73
  • [34] Tire Defect Detection Using Local and Global Features
    XIANG Yuan-yuan
    Computer Aided Drafting, Design and Manufacturing, 2013, (04): 49-52
  • [35] DCAT: Dual Cross-Attention-Based Transformer for Change Detection
    Zhou, Yuan
    Huo, Chunlei
    Zhu, Jiahang
    Huo, Leigang
    Pan, Chunhong
    REMOTE SENSING, 2023, 15 (09)
  • [36] Attention dual transformer with adaptive temporal convolutional for diabetic retinopathy detection
    Mishmala Sushith
    Ajanthaa Lakkshmanan
    M. Saravanan
    S. Castro
    SCIENTIFIC REPORTS, 15 (1)
  • [37] Cas-VSwin transformer: A variant swin transformer for surface-defect detection
    Gao, Linfeng
    Zhang, Jianxun
    Yang, Changhui
    Zhou, Yuechuan
    COMPUTERS IN INDUSTRY, 2022, 140
  • [38] A real-time anchor-free defect detector with global and local feature enhancement for surface defect detection
    Liu, Qing
    Liu, Min
    Wu, Q. M. Jonathan
    Shen, Weiming
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 246
  • [39] ETDNet: Efficient Transformer-Based Detection Network for Surface Defect Detection
    Zhou, Hantao
    Yang, Rui
    Hu, Runze
    Shu, Chang
    Tang, Xiaochu
    Li, Xiu
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [40] Combining transformer global and local feature extraction for object detection
    Li, Tianping
    Zhang, Zhenyi
    Zhu, Mengdi
    Cui, Zhaotong
    Wei, Dongmei
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (04): 4897-4920