Domain adaptation and knowledge distillation for lightweight pavement crack detection

Cited: 0
Authors
Xiao, Tianhao [1 ]
Pang, Rong [3 ,4 ]
Liu, Huijun [1 ]
Yang, Chunhua [1 ]
Li, Ao [2 ]
Niu, Chenxu [1 ]
Ruan, Zhimin [5 ]
Xu, Ling [2 ]
Ge, Yongxin [2 ]
Affiliations
[1] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
[2] Chongqing Univ, Sch Big Data & Software Engn, Chongqing 401331, Peoples R China
[3] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Chengdu 611756, Peoples R China
[4] China Merchants Chongqing Rd Engn Inspect Ctr Co L, Chongqing 400067, Peoples R China
[5] China Merchants Chongqing Commun Technol Res & Des, Chongqing 400067, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Pavement crack detection; Knowledge distillation; Lightweight model; Domain adaptation;
DOI
10.1016/j.eswa.2024.125734
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Pavement crack detection is crucial for maintaining safe driving conditions; thus, the timely and accurate detection of cracks is of considerable importance. However, although deep neural networks (DNNs) have performed well in pavement crack detection, their dependence on large-scale labeled datasets, excessive model parameters, and high computational costs limit their application at the edge or on mobile devices. Conventional approaches concentrate on domain adaptation to leverage unlabeled data but overlook the domain-shift issue, which can degrade performance, an effect that is especially noticeable in lightweight models. Therefore, we propose a lightweight deep domain-adaptive crack detection network (L-DDACDN) to address these issues. Specifically, a novel distillation loss that incorporates domain information is introduced, facilitating the transfer of knowledge from a teacher model to a student model. Additionally, L-DDACDN imitates the feature responses of the teacher model near the object anchor locations, ensuring that the student model effectively learns crucial features, thus addressing the domain-shift issue and maintaining performance in lightweight models. Experimental results show that, compared with the deep domain-adaptive crack detection network (DDACDN) trained with a large-scale pre-trained model, L-DDACDN loses on average only 3.5% in F1-score and 3.9% in Accuracy, while its model parameters and FLOPs are reduced by approximately 92%. Additionally, compared to YOLOv5, L-DDACDN demonstrates a notable improvement on the CQU-BPDD dataset, with an average increase of 5% in F1-score and 1.8% in Accuracy.
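The anchor-guided feature imitation described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the helper names (`imitation_mask`, `feature_imitation_loss`), the mask radius, and the plain masked-L2 form are assumptions chosen to show the general idea of restricting the distillation loss to regions near object anchors.

```python
import numpy as np

def imitation_mask(anchor_centers, h, w, radius=2):
    """Build a binary mask that is True near object anchor locations.

    anchor_centers: list of (row, col) anchor positions on the feature map.
    The radius is a hypothetical hyperparameter controlling the imitation region.
    """
    mask = np.zeros((h, w), dtype=bool)
    for cy, cx in anchor_centers:
        y0, y1 = max(0, cy - radius), min(h, cy + radius + 1)
        x0, x1 = max(0, cx - radius), min(w, cx + radius + 1)
        mask[y0:y1, x0:x1] = True
    return mask

def feature_imitation_loss(student_feat, teacher_feat, mask):
    """Masked L2 loss: the student imitates teacher features only where mask is True.

    student_feat, teacher_feat: arrays of shape (C, H, W).
    Normalized by the number of masked positions so the loss is scale-independent.
    """
    diff = (student_feat - teacher_feat) ** 2          # element-wise squared error
    masked = diff * mask[None, :, :]                   # zero out background regions
    return masked.sum() / max(int(mask.sum()), 1)
```

In a full pipeline this term would be added to the detection loss (and, per the abstract, combined with a domain-information-aware distillation term) so that the lightweight student focuses its limited capacity on the crack regions the teacher responds to, rather than matching the entire feature map.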
Pages: 13
Related papers
50 records in total
  • [41] Rethinking Lightweight Convolutional Neural Networks for Efficient and High-Quality Pavement Crack Detection
    Li, Kai
    Yang, Jie
    Ma, Siwei
    Wang, Bo
    Wang, Shanshe
    Tian, Yingjie
    Qi, Zhiquan
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (01) : 237 - 250
  • [42] Real-time pavement surface crack detection based on lightweight semantic segmentation model
    Yu, Huayang
    Deng, Yihao
    Guo, Feng
    TRANSPORTATION GEOTECHNICS, 2024, 48
  • [43] A lightweight deep learning network based on knowledge distillation for applications of efficient crack segmentation on embedded devices
    Chen, Jun
    Liu, Ye
    Hou, Jia-ao
    STRUCTURAL HEALTH MONITORING-AN INTERNATIONAL JOURNAL, 2023, 22 (05) : 3027 - 3046
  • [44] Learning Lightweight Face Detector with Knowledge Distillation
    Jin, Haibo
    Zhang, Shifeng
    Zhu, Xiangyu
    Tang, Yinhang
    Lei, Zhen
    Li, Stan Z.
    2019 INTERNATIONAL CONFERENCE ON BIOMETRICS (ICB), 2019,
  • [45] Structured Attention Knowledge Distillation for Lightweight Networks
    Gu Xiaowei
    Hui, Tian
    Dai Zhongjian
    PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 1726 - 1730
  • [46] Lightweight Spectrum Prediction Based on Knowledge Distillation
    Cheng, Runmeng
    Zhang, Jianzhao
    Deng, Junquan
    Zhu, Yanping
    RADIOENGINEERING, 2023, 32 (04) : 469 - 478
  • [47] Multidomain Object Detection Framework Using Feature Domain Knowledge Distillation
    Jaw, Da-Wei
    Huang, Shih-Chia
    Lu, Zhi-Hui
    Fung, Benjamin C. M.
    Kuo, Sy-Yen
    IEEE TRANSACTIONS ON CYBERNETICS, 2024, 54 (08) : 4643 - 4651
  • [48] Unsupervised domain adaptation for lip reading based on cross-modal knowledge distillation
    Takashima, Yuki
    Takashima, Ryoichi
    Tsunoda, Ryota
    Aihara, Ryo
    Takiguchi, Tetsuya
    Ariki, Yasuo
    Motoyama, Nobuaki
    EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2021, 2021 (01)
  • [49] Cycle Class Consistency with Distributional Optimal Transport and Knowledge Distillation for Unsupervised Domain Adaptation
    Tuan Nguyen
    Van Nguyen
    Trung Le
    Zhao, He
    Quan Hung Tran
    Dinh Phung
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, VOL 180, 2022, 180 : 1519 - 1529