Domain adaptation and knowledge distillation for lightweight pavement crack detection

Citations: 0
Authors
Xiao, Tianhao [1 ]
Pang, Rong [3 ,4 ]
Liu, Huijun [1 ]
Yang, Chunhua [1 ]
Li, Ao [2 ]
Niu, Chenxu [1 ]
Ruan, Zhimin [5 ]
Xu, Ling [2 ]
Ge, Yongxin [2 ]
Affiliations
[1] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
[2] Chongqing Univ, Sch Big Data & Software Engn, Chongqing 401331, Peoples R China
[3] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Chengdu 611756, Peoples R China
[4] China Merchants Chongqing Rd Engn Inspect Ctr Co Ltd, Chongqing 400067, Peoples R China
[5] China Merchants Chongqing Commun Technol Res & Design Inst Co Ltd, Chongqing 400067, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Pavement crack detection; Knowledge distillation; Lightweight model; Domain adaptation;
DOI
10.1016/j.eswa.2024.125734
CLC classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Pavement crack detection is crucial for maintaining safe driving conditions, so timely and accurate detection of cracks is of considerable importance. Although deep neural networks (DNNs) have performed well in pavement crack detection, their dependence on large-scale labeled datasets, excessive model parameters, and high computational costs limit their deployment on edge or mobile devices. Conventional approaches concentrate on domain adaptation to leverage unlabeled data but overlook the domain shift issue, which leads to performance degradation that is especially noticeable in lightweight models. We therefore propose a lightweight deep domain-adaptive crack detection network (L-DDACDN) to address these issues. Specifically, we introduce a novel distillation loss that incorporates domain information to facilitate the transfer of knowledge from a teacher model to a student model. Additionally, L-DDACDN imitates the teacher model's feature responses near object anchor locations, ensuring that the student model learns crucial features, thereby addressing the domain shift issue and preserving performance in lightweight models. Experimental results show that, compared with a deep domain-adaptive crack detection network (DDACDN) trained with a large-scale pre-trained model, L-DDACDN loses on average only 3.5% in F1-score and 3.9% in accuracy, while its parameters and FLOPs are reduced by approximately 92%. Compared with YOLOv5, L-DDACDN improves the F1-score and accuracy on the CQU-BPDD dataset by an average of 5% and 1.8%, respectively.
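The two ingredients described in the abstract (a distillation loss that carries domain information, and imitation of the teacher's feature responses near object anchor locations) can be sketched briefly. The following is a minimal, hypothetical PyTorch sketch: the function names, the anchor-mask radius, and the "domain_weight" factor standing in for the paper's domain term are assumptions for illustration, not the authors' implementation.

# Hypothetical sketch (not the authors' code) of the two ideas in the abstract:
# (a) a distillation loss carrying domain information via a per-batch weight,
# (b) feature imitation restricted to regions near object anchor locations.
import torch
import torch.nn.functional as F


def anchor_imitation_mask(anchor_centers, feat_h, feat_w, radius=2):
    # Binary H x W mask that is 1 in a small window around each anchor center
    # (given in feature-map coordinates) and 0 elsewhere, so the student only
    # imitates the teacher's responses around likely crack locations.
    mask = torch.zeros(feat_h, feat_w)
    for cx, cy in anchor_centers:
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, feat_w)
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, feat_h)
        mask[y0:y1, x0:x1] = 1.0
    return mask


def distillation_loss(student_feat, teacher_feat, mask,
                      student_logits, teacher_logits,
                      domain_weight=1.0, temperature=2.0):
    # (a) masked feature imitation: penalize student/teacher feature
    # differences only where the anchor mask is active.
    m = mask.unsqueeze(0).unsqueeze(0)  # 1 x 1 x H x W, broadcast over batch/channels
    feat_loss = ((student_feat - teacher_feat) ** 2 * m).sum() / m.sum().clamp(min=1.0)

    # (b) temperature-scaled soft-label distillation on detection logits.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # domain_weight is a stand-in for the paper's domain term; e.g. it could be
    # set higher for unlabeled target-domain batches than for source batches.
    return domain_weight * (feat_loss + kd_loss)


if __name__ == "__main__":
    h = w = 16
    teacher_feat, student_feat = torch.randn(1, 64, h, w), torch.randn(1, 64, h, w)
    teacher_logits, student_logits = torch.randn(8, 5), torch.randn(8, 5)
    mask = anchor_imitation_mask([(4, 5), (10, 12)], h, w)
    loss = distillation_loss(student_feat, teacher_feat, mask,
                             student_logits, teacher_logits, domain_weight=1.5)
    print(loss.item())

The mask restricts imitation to regions around anchors so the compact student is not forced to match the teacher's responses on uninformative background, which is where most of a pavement image lies.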
Pages: 13