Towards Unsupervised Domain Adaptation via Domain-Transformer

Times cited: 3
Authors
Ren, Chuan-Xian [1 ]
Zhai, Yiming [1 ]
Luo, You-Wei [1 ]
Yan, Hong [2 ]
Affiliations
[1] Sun Yat-sen University, School of Mathematics, Xingang Road, Guangzhou 510275, Guangdong, People's Republic of China
[2] City University of Hong Kong, Department of Electrical Engineering, 83 Tat Chee Avenue, Kowloon, Hong Kong
Funding
National Natural Science Foundation of China
Keywords
Feature learning; Domain adaptation; Discriminative analysis; Attention; Sample correspondence;
DOI
10.1007/s11263-024-02174-9
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
As a vital problem in pattern analysis and machine intelligence, Unsupervised Domain Adaptation (UDA) attempts to transfer an effective feature learner from a labeled source domain to an unlabeled target domain. Inspired by the success of the Transformer, several advances in UDA have been achieved by adopting pure transformers as network architectures, but such a simple application can only capture patch-level information and lacks interpretability. To address these issues, we propose the Domain-Transformer (DoT) with a domain-level attention mechanism to capture the long-range correspondence between cross-domain samples. On the theoretical side, we provide a mathematical understanding of DoT: (1) we connect the domain-level attention with optimal transport theory, which provides interpretability from Wasserstein geometry; (2) from the perspective of learning theory, Wasserstein distance-based generalization bounds are derived, which explain the effectiveness of DoT for knowledge transfer. On the methodological side, DoT integrates the domain-level attention and manifold structure regularization, which characterize sample-level information and locality consistency for cross-domain cluster structures. Moreover, the domain-level attention mechanism can be used as a plug-and-play module, so DoT can be implemented under different neural network architectures. Instead of explicitly modeling the distribution discrepancy at the domain level or class level, DoT learns transferable features under the guidance of long-range correspondence, so it is free of pseudo-labels and explicit domain discrepancy optimization. Extensive experimental results on several benchmark datasets validate the effectiveness of DoT.
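The abstract describes a domain-level attention module that builds a soft correspondence between source and target samples. The snippet below is a minimal PyTorch sketch of such a cross-domain attention block, not the authors' DoT implementation: the class name DomainCrossAttention, the projection dimensions, and the toy usage are illustrative assumptions. Its row-stochastic softmax matrix can be read as a transport-like correspondence plan, which is the intuition the paper formalizes via optimal transport.

```python
# Minimal sketch (assumptions noted above): target features attend to source
# features, producing (a) aligned target representations and (b) a soft
# sample-level correspondence matrix between the two domains.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainCrossAttention(nn.Module):
    """Cross-domain attention: queries from target, keys/values from source."""

    def __init__(self, feat_dim: int, proj_dim: int = 128):
        super().__init__()
        self.query = nn.Linear(feat_dim, proj_dim)   # projects target features
        self.key = nn.Linear(feat_dim, proj_dim)     # projects source features
        self.value = nn.Linear(feat_dim, feat_dim)   # source values carried to target
        self.scale = proj_dim ** -0.5

    def forward(self, target_feats: torch.Tensor, source_feats: torch.Tensor):
        # target_feats: (n_t, d), source_feats: (n_s, d)
        q = self.query(target_feats)                 # (n_t, p)
        k = self.key(source_feats)                   # (n_s, p)
        v = self.value(source_feats)                 # (n_s, d)
        logits = q @ k.t() * self.scale              # (n_t, n_s) pairwise similarities
        attn = F.softmax(logits, dim=-1)             # row-stochastic soft correspondence
        aligned = attn @ v                           # target samples re-expressed via source
        return aligned, attn


if __name__ == "__main__":
    # Toy usage: 32 target and 48 source samples with 256-dim backbone features.
    module = DomainCrossAttention(feat_dim=256)
    tgt, src = torch.randn(32, 256), torch.randn(48, 256)
    aligned, plan = module(tgt, src)
    print(aligned.shape, plan.shape)  # torch.Size([32, 256]) torch.Size([32, 48])
```

Because the block only consumes feature matrices, it can sit on top of a CNN or ViT backbone, which is consistent with the plug-and-play claim in the abstract.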
Pages: 6163-6183
Number of pages: 21
Related papers
50 items in total
  • [1] Wang, Xiyu; Guo, Pengxin; Zhang, Yu. Unsupervised Domain Adaptation via Bidirectional Cross-Attention Transformer. Machine Learning and Knowledge Discovery in Databases: Research Track (ECML PKDD 2023), Part V, 2023, 14173: 309-325.
  • [2] Hao, Zhiwei; Wang, Shengsheng; Long, Sifan; Li, Yiyang; Chai, Hao. Bidirectional feature enhancement transformer for unsupervised domain adaptation. The Visual Computer, 2024, 40(9): 6261-6277.
  • [3] Yang, Jinyu; Liu, Jingjing; Xu, Ning; Huang, Junzhou. TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation. 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023: 520-530.
  • [4] Peng, Duo; Ke, Qiuhong; Ambikapathi, ArulMurugan; Yazici, Yasin; Lei, Yinjie; Liu, Jun. Unsupervised Domain Adaptation via Domain-Adaptive Diffusion. IEEE Transactions on Image Processing, 2024, 33: 4245-4260.
  • [5] Ye, Yifan; Fu, Shuai; Chen, Jing. Learning cross-domain representations by vision transformer for unsupervised domain adaptation. Neural Computing & Applications, 2023, 35(15): 10847-10860.
  • [6] Li, Mengmeng; Zhang, Congcong; Zhao, Wufan; Zhou, Wen. Cross-Domain Urban Land Use Classification via Scenewise Unsupervised Multisource Domain Adaptation With Transformer. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2024, 17: 10051-10066.
  • [7] Zhao, Xian; Huang, Lei; Nie, Jie; Wei, Zhiqiang. Towards Adaptive Multi-Scale Intermediate Domain via Progressive Training for Unsupervised Domain Adaptation. IEEE Transactions on Multimedia, 2024, 26: 5054-5064.
  • [8] Xu, Xuemiao; He, Hai; Zhang, Huaidong; Xu, Yangyang; He, Shengfeng. Unsupervised Domain Adaptation via Importance Sampling. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(12): 4688-4699.
  • [9] Ma, Wenxuan; Zhang, Jinming; Li, Shuang; Liu, Chi Harold; Wang, Yulin; Li, Wei. Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation. Proceedings of the 30th ACM International Conference on Multimedia (MM 2022), 2022: 5620-5629.
  • [10] Wang, Qing; Rao, Wei; Sun, Sining; Xie, Lei; Chng, Eng Siong; Li, Haizhou. Unsupervised Domain Adaptation via Domain Adversarial Training for Speaker Recognition. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018: 4889-4893.