Addressing the Overfitting in Partial Domain Adaptation With Self-Training and Contrastive Learning

Citations: 9
|
Authors
He, Chunmei [1 ,2 ]
Li, Xiuguang [1 ,2 ]
Xia, Yue [1 ,2 ]
Tang, Jing [1 ,2 ]
Yang, Jie [1 ,2 ]
Ye, Zhengchun [3 ]
Affiliations
[1] Xiangtan Univ, Sch Comp Sci, Xiangtan 411105, Hunan, Peoples R China
[2] Xiangtan Univ, Sch Cyberspace Sci, Xiangtan 411105, Hunan, Peoples R China
[3] Xiangtan Univ, Sch Mech Engn, Xiangtan 411105, Hunan, Peoples R China
Keywords
Entropy; Feature extraction; Reliability; Adaptation models; Training; Cyberspace; Computer science; Transfer learning; partial domain adaptation; deep neural network; image classification; contrastive learning;
DOI
10.1109/TCSVT.2023.3296617
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Partial domain adaptation (PDA) assumes that the target-domain label set is a subset of the source-domain label set, a setting that is close to real-world scenarios. At present, two main approaches address overfitting to the source domain in PDA: entropy minimization and weighted self-training. However, for samples whose prediction distribution is relatively flat, entropy minimization may sharpen the prediction without making it accurate, causing the model to learn more erroneous information; weighted self-training, in turn, introduces noisy error signals during self-training because of noisy weights. We address these issues and propose a self-training contrastive partial domain adaptation method (STCPDA), which mines domain information with two modules. First, a self-training module built on simple target-domain samples counters overfitting to the source domain: target samples are divided into simple samples with high reliability and difficult samples with low reliability, and the pseudo-labels of the simple samples are selected for self-training. Second, a contrastive learning module embeds contrastive learning into the feature space of both domains; it fully exploits the hidden information in all domain samples and makes the class boundaries more salient. Extensive experiments on five datasets demonstrate the effectiveness and excellent classification performance of our method.
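To make the two ingredients of the abstract concrete, here is a minimal NumPy sketch of (a) splitting target samples into "simple" and "difficult" by prediction confidence and keeping pseudo-labels only for the simple ones, and (b) a generic supervised contrastive loss over normalized features. The 0.9 confidence threshold, the temperature, and the exact loss form are illustrative assumptions, not the paper's specific formulation.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with max-subtraction for numerical stability."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def select_simple_samples(logits, threshold=0.9):
    """Split target predictions into 'simple' (high-confidence) and
    'difficult' (low-confidence) samples; return the simple-sample mask
    and the pseudo-labels. The 0.9 threshold is an illustrative choice."""
    probs = softmax(logits)
    conf = probs.max(axis=1)
    pseudo_labels = probs.argmax(axis=1)
    return conf >= threshold, pseudo_labels

def contrastive_loss(features, labels, temperature=0.5):
    """Generic supervised contrastive loss on L2-normalized features:
    same-label pairs are pulled together, different-label pairs apart."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature          # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)               # exclude self-pairs
    n = len(feats)
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    # log-softmax of each anchor's similarities to all other samples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos_counts = pos.sum(axis=1)
    valid = pos_counts > 0                       # anchors with >= 1 positive
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1)[valid] / pos_counts[valid]
    return per_anchor.mean()
```

In a full pipeline, only the samples flagged simple would contribute a cross-entropy term with their pseudo-labels, while the contrastive term acts on features from both domains to sharpen class boundaries.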
Pages: 1532 - 1545
Page count: 14
Related Papers
50 records
  • [41] Single slice thigh CT muscle group segmentation with domain adaptation and self-training
    Yang, Qi
    Yu, Xin
    Lee, Ho Hin
    Cai, Leon Y.
    Xu, Kaiwen
    Bao, Shunxing
    Huo, Yuankai
    Moore, Ann Zenobia
    Makrogiannis, Sokratis
    Ferrucci, Luigi
    Landman, Bennett A.
    JOURNAL OF MEDICAL IMAGING, 2023, 10 (04)
  • [42] ONDA-DETR: ONLINE DOMAIN ADAPTATION FOR DETECTION TRANSFORMERS WITH SELF-TRAINING FRAMEWORK
    Suzuki, Satoshi
    Yamane, Taiga
    Makishima, Naoki
    Suzuki, Keita
    Ando, Atsushi
    Masumura, Ryo
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 1780 - 1784
  • [43] Geometry-Aware Self-Training for Unsupervised Domain Adaptation on Object Point Clouds
    Zou, Longkun
    Tang, Hui
    Chen, Ke
    Jia, Kui
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 6383 - 6392
  • [44] Unsupervised Domain Adaptation for Medical Image Segmentation via Self-Training of Early Features
    Sheikh, Rasha
    Schultz, Thomas
    INTERNATIONAL CONFERENCE ON MEDICAL IMAGING WITH DEEP LEARNING, VOL 172, 2022, 172 : 1096 - 1107
  • [45] Domain Adaptation for Medical Image Segmentation Using Transformation-Invariant Self-training
    Ghamsarian, Negin
    Tejero, Javier Gamazo
    Marquez-Neila, Pablo
    Wolf, Sebastian
    Zinkernagel, Martin
    Schoeffmann, Klaus
    Sznitman, Raphael
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT I, 2023, 14220 : 331 - 341
  • [46] Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-training
    Zou, Yang
    Yu, Zhiding
    Kumar, B. V. K. Vijaya
    Wang, Jinsong
    COMPUTER VISION - ECCV 2018, PT III, 2018, 11207 : 297 - 313
  • [47] DAST: Unsupervised Domain Adaptation in Semantic Segmentation Based on Discriminator Attention and Self-Training
    Yu, Fei
    Zhang, Mo
    Dong, Hexin
    Hu, Sheng
    Dong, Bin
    Zhang, Li
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10754 - 10762
  • [48] Improving Self-training for Cross-lingual Named Entity Recognition with Contrastive and Prototype Learning
    Zhou, Ran
    Li, Xin
    Bing, Lidong
    Cambria, Erik
    Miao, Chunyan
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 4018 - 4031
  • [49] Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval
    Kulshreshtha, Devang
    Belfer, Robert
    Serban, Iulian Vlad
    Reddy, Siva
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 7064 - 7078
  • [50] Saliency Regularization for Self-Training with Partial Annotations
    Wang, Shouwen
    Wan, Qian
    Xiang, Xiang
    Zeng, Zhigang
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 1611 - 1620