Addressing the Overfitting in Partial Domain Adaptation With Self-Training and Contrastive Learning

Cited by: 9
Authors
He, Chunmei [1 ,2 ]
Li, Xiuguang [1 ,2 ]
Xia, Yue [1 ,2 ]
Tang, Jing [1 ,2 ]
Yang, Jie [1 ,2 ]
Ye, Zhengchun [3 ]
Affiliations
[1] Xiangtan Univ, Sch Comp Sci, Xiangtan 411105, Hunan, Peoples R China
[2] Xiangtan Univ, Sch Cyberspace Sci, Xiangtan 411105, Hunan, Peoples R China
[3] Xiangtan Univ, Sch Mech Engn, Xiangtan 411105, Hunan, Peoples R China
Keywords
Entropy; Feature extraction; Reliability; Adaptation models; Training; Cyberspace; Computer science; Transfer learning; Partial domain adaptation; Deep neural network; Image classification; Contrastive learning
DOI
10.1109/TCSVT.2023.3296617
CLC (Chinese Library Classification) codes
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline classification codes
0808; 0809
Abstract
Partial domain adaptation (PDA) assumes that the target-domain class label set is a subset of the source-domain label set, a problem setting that is close to real-world scenarios. At present, there are mainly two approaches to mitigating overfitting to the source domain in PDA: entropy minimization and weighted self-training. However, entropy minimization may sharpen an already inaccurate prediction distribution for samples whose predictions are relatively flat, causing the model to learn more erroneous information, while weighted self-training introduces noisy information during the self-training process because of noisy weights. We address these issues and propose the self-training contrastive partial domain adaptation method (STCPDA), which mines domain information with two modules. First, a self-training module built on simple target-domain samples addresses overfitting to the source domain: target samples are divided into simple samples with high reliability and difficult samples with low reliability, and the pseudo-labels of the simple samples are selected for self-training. Second, a contrastive learning module embeds contrastive learning into the feature space of the source and target domains; through this module we fully exploit the hidden information in all domain samples and make the class boundaries more salient. Extensive experiments on five datasets demonstrate the effectiveness and excellent classification performance of our method.
Pages: 1532-1545
Number of pages: 14
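To make the two modules described in the abstract concrete, the following is a minimal illustrative sketch assuming a PyTorch implementation. The confidence threshold, temperature, loss form, and all function names are assumptions for illustration and are not taken from the paper; they only indicate how confidence-based pseudo-label selection and a supervised contrastive loss over both domains could be wired together.

```python
# Illustrative sketch only: hyperparameters and function names are assumptions,
# not the authors' released code.
import torch
import torch.nn.functional as F

def split_by_reliability(logits, threshold=0.95):
    """Split target-domain predictions into 'simple' (high-confidence) and
    'difficult' (low-confidence) samples, as described in the abstract."""
    probs = F.softmax(logits, dim=1)
    conf, pseudo_labels = probs.max(dim=1)
    simple_mask = conf >= threshold          # reliable (simple) samples
    return simple_mask, pseudo_labels

def self_training_loss(logits, pseudo_labels, simple_mask):
    """Cross-entropy on the pseudo-labels of the simple samples only."""
    if simple_mask.sum() == 0:
        return logits.new_zeros(())
    return F.cross_entropy(logits[simple_mask], pseudo_labels[simple_mask])

def contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive (InfoNCE-style) loss over L2-normalized features
    of source and target samples; source samples use ground-truth labels,
    target samples use their pseudo-labels."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                         # pairwise similarities
    mask_pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    mask_pos.fill_diagonal_(0)                            # exclude self-pairs
    logits_mask = torch.ones_like(mask_pos).fill_diagonal_(0)
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-8)
    pos_count = mask_pos.sum(dim=1).clamp(min=1)
    return (-(mask_pos * log_prob).sum(dim=1) / pos_count).mean()
```

In this sketch, the self-training term is applied only to high-reliability target samples, while the contrastive term pulls together same-class features from both domains and pushes apart different-class features, which mirrors the class-boundary sharpening described in the abstract.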