Gradient Adjusted and Weight Rectified Mean Teacher for Source-Free Object Detection

Cited by: 0
Authors
Peng, Jiawen [1 ]
Chen, Jiaxin [1 ]
Hu, Yanxu [1 ]
Pan, Rong [1 ]
Ma, Andy J. [1 ,2 ,3 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[2] Guangdong Prov Key Lab Informat Secur Technol, Guangzhou, Peoples R China
[3] Minist Educ, Key Lab Machine Intelligence & Adv Comp, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Source-free Object Detection; Negative Gradient Adjustment; Source Weight Rectification;
DOI
10.1007/978-3-031-44195-0_9
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Source-free object detection (SFOD) aims at adapting object detectors to an unlabeled target domain without access to the labeled source domain. Recent SFOD methods are built on the Mean Teacher framework, which consists of a student and a teacher model for self-training. Despite their success, existing methods suffer from missing detections and from fitting to incorrect pseudo labels. To overcome these challenges, we propose a Gradient Adjusted and Weight Rectified Mean Teacher framework with two novel training strategies for SFOD, i.e., Negative Gradient Adjustment (NGA) and Source Weight Rectification (SWR). The proposed Negative Gradient Adjustment suppresses the negative gradients caused by missing detections, while the Source Weight Rectification enhances robustness by rectifying errors in the pseudo labels. Additionally, weak-strong consistency data augmentation is introduced for stronger detector performance. Extensive experiments on four benchmarks demonstrate that our proposed method outperforms existing SFOD methods.
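As a rough illustration of the Mean Teacher self-training loop summarized in the abstract, the sketch below shows an EMA teacher update, confidence-based pseudo-label filtering on weakly augmented images, and a down-weighted background loss as a crude stand-in for Negative Gradient Adjustment. The detector outputs are assumed to follow torchvision-style dicts, and `detection_loss` together with the weak/strong augmented inputs are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Minimal Mean Teacher sketch for SFOD (assumed interfaces, not the paper's code).

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.999):
    """Teacher parameters track an exponential moving average of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def filter_pseudo_labels(outputs, score_thresh: float = 0.8):
    """Keep only confident teacher detections (torchvision-style output dicts)."""
    kept = []
    for out in outputs:
        mask = out["scores"] >= score_thresh
        kept.append({k: v[mask] for k, v in out.items()})
    return kept

def train_step(student, teacher, weak_imgs, strong_imgs, optimizer,
               score_thresh: float = 0.8, neg_weight: float = 0.5):
    """One self-training step: teacher labels weak views, student learns on strong views."""
    teacher.eval()
    with torch.no_grad():
        pseudo = filter_pseudo_labels(teacher(weak_imgs), score_thresh)

    student.train()
    # `detection_loss` is a hypothetical helper returning foreground/background terms.
    fg_loss, bg_loss = detection_loss(student(strong_imgs), pseudo)
    # Down-weighting the background term only approximates Negative Gradient
    # Adjustment, which suppresses gradients caused by missed detections.
    loss = fg_loss + neg_weight * bg_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```

In a typical setup the teacher is initialized as a frozen copy of the source-pretrained student (e.g., via `copy.deepcopy`) and is updated only through the EMA rule above; the paper's Source Weight Rectification of pseudo-label errors is not reproduced here.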
Pages: 100-111 (12 pages)
Related Papers
50 records in total
  • [1] Balanced Teacher for Source-Free Object Detection
    Deng, Jinhong
    Li, Wen
    Duan, Lixin
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (08) : 7231 - 7243
  • [2] Dynamic Retraining-Updating Mean Teacher for Source-Free Object Detection
    Khanh, Trinh Le Ba
    Huy-Hung Nguyen
    Long Hoang Pham
    Duong Nguyen-Ngoc Tran
    Jeon, Jae Wook
    COMPUTER VISION - ECCV 2024, PT LII, 2025, 15110 : 328 - 344
  • [3] Source-free domain adaptive object detection based on pseudo-supervised mean teacher
    Wei, Xing
    Bai, Ting
    Zhai, Yan
    Chen, Lei
    Luo, Hui
    Zhao, Chong
    Lu, Yang
    JOURNAL OF SUPERCOMPUTING, 2023, 79 (06): : 6228 - 6251
  • [4] Source-free domain adaptive object detection based on pseudo-supervised mean teacher
    Xing Wei
    Ting Bai
    Yan Zhai
    Lei Chen
    Hui Luo
    Chong Zhao
    Yang Lu
    The Journal of Supercomputing, 2023, 79 : 6228 - 6251
  • [5] A relation-enhanced mean-teacher framework for source-free domain adaptation of object detection
    Tian, Dingqing
    Xu, Changbo
    Cao, Shaozhong
    ALEXANDRIA ENGINEERING JOURNAL, 2025, 116 : 439 - 450
  • [6] MIXTURE OF TEACHER EXPERTS FOR SOURCE-FREE DOMAIN ADAPTIVE OBJECT DETECTION
    Vibashan, V. S.
    Oza, Poojan
    Sindagi, Vishwanath A.
    Patel, Vishal M.
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 3606 - 3610
  • [7] Periodically Exchange Teacher-Student for Source-Free Object Detection
    Liu, Qipeng
    Lin, Luojun
    Shen, Zhifeng
    Yang, Zhifeng
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 6391 - 6401
  • [8] Decoupled Unbiased Teacher for Source-Free Domain Adaptive Medical Object Detection
    Liu, Xinyu
    Li, Wuyang
    Yuan, Yixuan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (06) : 7287 - 7298
  • [9] Source-Free Object Detection by Learning to Overlook Domain Style
    Li, Shuaifeng
    Ye, Mao
    Zhu, Xiatian
    Zhou, Lihua
    Xiong, Lin
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 8004 - 8013
  • [10] Run and Chase: Towards Accurate Source-Free Domain Adaptive Object Detection
    Lin, Luojun
    Yang, Zhifeng
    Liu, Qipeng
    Yu, Yuanlong
    Lin, Qifeng
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2453 - 2458