SRoUDA: Meta Self-Training for Robust Unsupervised Domain Adaptation

Cited: 0
|
Authors
Zhu, Wanqing [1,2]
Yin, Jia-Li [1,2]
Chen, Bo-Hao [3]
Liu, Ximeng [1,2]
Affiliations
[1] Fujian Prov Key Lab Informat Secur & Network Syst, Fuzhou 350108, Peoples R China
[2] Fuzhou Univ, Coll Comp Sci & Big Data, Fuzhou 350108, Peoples R China
[3] Yuan Ze Univ, Dept Comp Sci & Engn, Taoyuan, Taiwan
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
As acquiring manual labels on data can be costly, unsupervised domain adaptation (UDA), which transfers knowledge learned from a richly labeled source dataset to an unlabeled target dataset, is gaining increasing popularity. While extensive studies have been devoted to improving model accuracy on the target domain, the important issue of model robustness has been neglected. To make things worse, conventional adversarial training (AT) methods for improving model robustness are inapplicable in the UDA scenario, since they train models on adversarial examples generated by a supervised loss function. In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving the adversarial robustness of UDA models. Based on the self-training paradigm, SRoUDA starts by pre-training a source model, applying a UDA baseline to labeled source data and unlabeled target data with a developed random masked augmentation (RMA), and then alternates between adversarial target-model training on pseudo-labeled target data and fine-tuning the source model via a meta step. While self-training allows the direct incorporation of AT into UDA, the meta step in SRoUDA further helps mitigate error propagation from noisy pseudo labels. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of SRoUDA, which achieves significant improvements in model robustness without harming clean accuracy.
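The alternating scheme described in the abstract can be illustrated with a toy, first-order sketch: a linear source model is pre-trained on labeled source data (RMA omitted), then a target model is adversarially trained on pseudo-labeled target data while the source model keeps being refined. All names here are illustrative, and the "meta-style" source update is a crude stand-in for the paper's actual meta step, which uses the target model's performance as feedback:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    """Gradient of the mean logistic loss for a linear model."""
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def with_bias(X):
    return np.hstack([X, np.ones((len(X), 1))])

# Toy domains: target inputs are source inputs under a covariate shift.
Xs = rng.normal(size=(400, 2))
ys = (Xs.sum(axis=1) > 0).astype(float)
Xt = Xs + 0.3                      # shifted inputs; labels unobserved
Fs, Ft = with_bias(Xs), with_bias(Xt)

# 1) Pre-train the source model on labeled source data (RMA omitted here).
w_src = np.zeros(3)
for _ in range(300):
    w_src -= 0.5 * grad(w_src, Fs, ys)

# 2) Alternate adversarial target training with a meta-style source update.
w_tgt = w_src.copy()
eps = 0.05
for _ in range(100):
    # Pseudo-label the target data with the current source model.
    pseudo = (sigmoid(Ft @ w_src) > 0.5).astype(float)
    # FGSM-style perturbation of the input features (not the bias column),
    # using the pseudo labels in place of unavailable ground truth.
    g_x = (sigmoid(Ft @ w_tgt) - pseudo)[:, None] * w_tgt[None, :2]
    F_adv = Ft.copy()
    F_adv[:, :2] += eps * np.sign(g_x)
    # Adversarially train the target model on pseudo-labeled data.
    w_tgt -= 0.5 * grad(w_tgt, F_adv, pseudo)
    # Meta-style step (first-order proxy): keep refining the source model on
    # its labeled data so the pseudo labels it emits stay reliable.
    w_src -= 0.1 * grad(w_src, Fs, ys)

yt = ys  # in this toy setup, labels carry over with the shifted inputs
acc = (((Ft @ w_tgt) > 0) == yt.astype(bool)).mean()
```

The sketch shows why self-training makes AT applicable at all: the supervised loss that generates adversarial examples is evaluated against pseudo labels rather than ground truth, so any mechanism that keeps those pseudo labels clean (the meta step, here crudely approximated) directly limits error propagation into the robust target model.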
Pages: 3852-3860
Page count: 9
Related papers
50 in total
  • [31] Improve conditional adversarial domain adaptation using self-training
    Wang, Zi
    Sun, Xiaoliang
    Su, Ang
    Wang, Gang
    Li, Yang
    Yu, Qifeng
    IET IMAGE PROCESSING, 2021, 15 (10) : 2169 - 2178
  • [33] Self-training transformer for source-free domain adaptation
    Yang, Guanglei
    Zhong, Zhun
    Ding, Mingli
    Sebe, Nicu
    Ricci, Elisa
    APPLIED INTELLIGENCE, 2023, 53 (13) : 16560 - 16574
  • [34] Self-training Guided Adversarial Domain Adaptation For Thermal Imagery
    Akkaya, Ibrahim Batuhan
    Altinel, Fazil
    Halici, Ugur
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 4317 - 4326
  • [35] Domain Adaptation in Human Activity Recognition through Self-Training
    Al Kfari, Moh'd Khier
    Luedtke, Stefan
    COMPANION OF THE 2024 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING, UBICOMP COMPANION 2024, 2024, : 897 - 903
  • [36] Manifold-Aware Self-Training for Unsupervised Domain Adaptation on Regressing 6D Object Pose
    Zhang, Yichen
    Lin, Jiehong
    Chen, Ke
    Xu, Zelin
    Wang, Yaowei
    Jia, Kui
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 1740 - 1748
  • [37] ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection
    Yang, Jihan
    Shi, Shaoshuai
    Wang, Zhe
    Li, Hongsheng
    Qi, Xiaojuan
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 10363 - 10373
  • [38] Unsupervised Controllable Generation with Self-Training
    Chrysos, Grigorios G.
    Kossaifi, Jean
    Yu, Zhiding
    Anandkumar, Anima
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [39] Self-Training for Unsupervised Parsing with PRPN
    Mohananey, Anhad
    Kann, Katharina
    Bowman, Samuel R.
    16TH INTERNATIONAL CONFERENCE ON PARSING TECHNOLOGIES AND IWPT 2020 SHARED TASK ON PARSING INTO ENHANCED UNIVERSAL DEPENDENCIES, 2020, : 105 - 110
  • [40] Doubly Robust Self-Training
    Zhu, Banghua
    Ding, Mingyu
    Jacobson, Philip
    Wu, Ming
    Zhan, Wei
    Jordan, Michael I.
    Jiao, Jiantao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,