A relation-enhanced mean-teacher framework for source-free domain adaptation of object detection

Cited by: 0
Authors
Tian, Dingqing [1 ]
Xu, Changbo [1 ]
Cao, Shaozhong [1 ]
Affiliations
[1] Beijing Inst Graph Commun, 1 Band 2, Xinghua St, Beijing 102600, Peoples R China
Keywords
Source-free domain adaptation object detection; Graph neural network; Mean-teacher
DOI
10.1016/j.aej.2024.12.051
CLC number
T [Industrial Technology]
Subject classification code
08
Abstract
Source-Free Domain Adaptation Object Detection (SF-DAOD) is a challenging computer-vision task that arises when the source-domain dataset is not accessible. Existing work leaves three serious issues unresolved: (1) the semantic topological structure among instances is overlooked; (2) training attends to a single domain only, without considering the interaction of information between domains; and (3) low-quality pseudo-labels degrade training effectiveness. In this paper, we propose a Relation-Enhanced Mean-Teacher (RMT) framework that uses graph neural networks to address these issues. We build a graph structure from the semantic topological structure and the location information of instances, and we employ a Graph-Guided Feature Fusion (GFF) network to align the source and target domains. Furthermore, we use these features and the graph to construct a Graph-Guided Bidirectional Verification (GBV) module that selects high-quality pseudo-labels for supervision. Our experiments on four domain-shift scenarios with six standard benchmark datasets demonstrate that our approach outperforms various existing state-of-the-art domain adaptation methods.
Pages: 439-450
Page count: 12
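The abstract names three mechanisms: a mean-teacher update, a graph built from semantic and location cues on detected instances, and graph-guided verification of pseudo-labels. Below is a minimal, hypothetical PyTorch sketch of how such pieces could fit together. Every function name, threshold, and the neighbour-consistency stand-in for the GBV step are illustrative assumptions based only on the abstract, not the authors' implementation.

```python
# Hypothetical sketch of the pieces described in the RMT abstract:
# an EMA mean-teacher update, an instance graph fusing semantic and
# location cues, and confidence-plus-graph pseudo-label filtering.
# All names and thresholds are illustrative, not the authors' code.
import torch
import torch.nn.functional as F


@torch.no_grad()
def update_teacher(teacher, student, momentum=0.999):
    """Exponential-moving-average update of the teacher's weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)


def build_instance_graph(roi_feats, boxes, sim_thresh=0.5):
    """Adjacency over detected instances.

    Edges combine semantic affinity (cosine similarity of ROI features)
    with a location prior (decaying with box-center distance), loosely
    following the abstract's "semantic topological structure plus
    location information" description. Boxes are assumed xyxy.
    """
    feats = F.normalize(roi_feats, dim=1)          # (N, D) unit vectors
    semantic = feats @ feats.t()                   # cosine similarity
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2    # (N, 2) box centers
    dist = torch.cdist(centers, centers)           # pairwise distances
    location = torch.exp(-dist / (dist.mean() + 1e-6))
    adj = semantic * location                      # fuse the two cues
    return (adj > sim_thresh).float()              # self-loops survive


def filter_pseudo_labels(scores, adj, score_thresh=0.7):
    """Keep detections that are both confident and graph-consistent:
    a box survives if its own score and the mean score of its graph
    neighbours (self included) both clear the threshold. This is a
    simple stand-in for the paper's bidirectional verification."""
    neighbour_score = (adj @ scores) / adj.sum(dim=1).clamp(min=1.0)
    return (scores > score_thresh) & (neighbour_score > score_thresh)
```

In this reading, the graph suppresses isolated high-confidence false positives (their neighbours score low) while keeping mutually supporting detections, which matches the abstract's claim that the graph helps select high-quality pseudo-labels; the actual GFF and GBV modules in the paper are presumably learned rather than rule-based as sketched here.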