Domain-Invariant Label Propagation With Adaptive Graph Regularization

Cited by: 0
Authors
Zhang, Yanning [1 ]
Tao, Jianwen [1 ]
Yan, Liangda [2 ]
Affiliations
[1] Ningbo Polytech, Inst Artificial Intelligence Applicat, Ningbo 315800, Peoples R China
[2] Zhejiang Business Technol Inst, Sch Elect Informat, Ningbo 315012, Zhejiang, Peoples R China
Source
IEEE ACCESS, 2024, Vol. 12
Keywords
Adaptation models; Optimization; Deep learning; Representation learning; Training; Knowledge transfer; Upper bound; Robustness; Predictive models; Noise measurement; Domain adaptation; maximum mean discrepancy; adaptive graph Laplacian; label propagation
DOI
10.1109/ACCESS.2024.3510889
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
As an effective machine learning paradigm, domain adaptation (DA) aims to enhance learning performance in a target domain by exploiting one or more related but distinct source domains. Mainstream DA methods learn discriminative domain-invariant feature representations by incorporating "pseudo labels" of the target domain to better achieve knowledge transfer. However, most existing methods alternate between two separate stages, optimizing domain-invariant features and updating the "pseudo labels", which makes it difficult for them to reach optimal learning performance. To jointly optimize the updating of "pseudo labels" and the learning of domain-invariant feature representations, a framework of Domain-Invariant Label prOpagation (DILO) with adaptive graph regularization is proposed. By combining semi-supervised knowledge adaptation with label propagation on the domain data, DILO jointly optimizes domain-invariant feature representations and the target learning task in a unified framework, allowing the two objectives to benefit each other. Specifically, by introducing soft labels, a joint distribution measurement model is established to simultaneously alleviate both marginal and conditional distribution differences between domains, and an adaptive probability graph model is constructed to enhance the robustness of label propagation. Moreover, a robust sigma-norm is applied to both the joint distribution measurement and the inductive learning models, yielding a unified objective optimization formulation, and an effective algorithm is developed to solve the resulting optimization problem. Compared with several representative DA methods on four cross-domain visual datasets, the proposed method achieves better or comparable robustness in adaptation learning.
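The abstract couples joint-distribution alignment (via maximum mean discrepancy) with label propagation over an adaptive probability graph. As a rough illustration of those two ingredients only (not the paper's actual DILO objective, its sigma-norm weighting, or its joint optimization), the sketch below computes a kernel MMD between source and target features and runs a plain label-propagation iteration on a row-normalized affinity graph; the RBF kernel, hyperparameters, and clamping scheme are assumptions made for the example.

```python
# Illustrative sketch only: kernel MMD + basic graph label propagation,
# the two building blocks the abstract combines. Kernel choice, graph
# construction, and parameters are assumptions, not DILO's formulation.
import numpy as np

def mmd2(Xs, Xt, gamma=1.0):
    """Squared MMD between source Xs and target Xt with an RBF kernel (assumed)."""
    def rbf(A, B):
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d)
    return rbf(Xs, Xs).mean() + rbf(Xt, Xt).mean() - 2 * rbf(Xs, Xt).mean()

def label_propagation(X, Y_init, n_labeled, alpha=0.99, gamma=1.0, n_iter=50):
    """Propagate soft labels over a row-normalized RBF affinity graph."""
    d = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    W = np.exp(-gamma * d)
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)      # probabilistic transition matrix
    F = Y_init.astype(float).copy()           # soft label matrix (n x classes)
    for _ in range(n_iter):
        F = alpha * P @ F + (1 - alpha) * Y_init
        F[:n_labeled] = Y_init[:n_labeled]    # clamp labeled (source) rows
    return F / F.sum(axis=1, keepdims=True)

# Toy usage: two-class labeled source and unlabeled, slightly shifted target.
rng = np.random.default_rng(0)
Xs = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
Xt = np.vstack([rng.normal(0.5, 1, (15, 5)), rng.normal(3.5, 1, (15, 5))])
Ys = np.zeros((40, 2)); Ys[:20, 0] = 1; Ys[20:, 1] = 1
Y0 = np.vstack([Ys, np.zeros((30, 2))])       # target rows start unlabeled
print("MMD^2(source, target):", mmd2(Xs, Xt, gamma=0.1))
F = label_propagation(np.vstack([Xs, Xt]), Y0, n_labeled=40, gamma=0.1)
print("target pseudo-labels:", F[40:].argmax(axis=1))
```

In DILO these two pieces are reportedly optimized jointly rather than in alternating stages as above; the sketch keeps them separate purely for readability.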
Pages: 190728-190745
Page count: 18
Related Papers (50 in total)
  • [41] Learning Domain-Invariant Subspace Using Domain Features and Independence Maximization. Yan, Ke; Kou, Lu; Zhang, David. IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48(01): 288-299.
  • [42] Domain-Invariant Representation Learning From EEG With Private Encoders. Bethge, David; Hallgarten, Philipp; Grosse-Puppendahl, Tobias; Kari, Mohamed; Mikut, Ralf; Schmidt, Albrecht; Oezdenizci, Ozan. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 1236-1240.
  • [43] Learning Domain-Invariant Discriminative Features for Heterogeneous Face Recognition. Yang, Shanmin; Fu, Keren; Yang, Xiao; Lin, Ye; Zhang, Jianwei; Peng, Cheng. IEEE ACCESS, 2020, 8: 209790-209801.
  • [44] Knowledge Distillation-Based Domain-Invariant Representation Learning for Domain Generalization. Niu, Ziwei; Yuan, Junkun; Ma, Xu; Xu, Yingying; Liu, Jing; Chen, Yen-Wei; Tong, Ruofeng; Lin, Lanfen. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 245-255.
  • [45] On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources. Trung Phung; Trung Le; Long Vuong; Toan Tran; Anh Tran; Bui, Hung; Dinh Phung. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34.
  • [46] DIVIDE: Learning a Domain-Invariant Geometric Space for Depth Estimation. Shim, Dongseok; Kim, H. Jin. IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9(05): 4663-4670.
  • [47] DIRL: Domain-Invariant Representation Learning for Generalizable Semantic Segmentation. Xu, Qi; Yao, Liang; Jiang, Zhengkai; Jiang, Guannan; Chu, Wenqing; Han, Wenhui; Zhang, Wei; Wang, Chengjie; Tai, Ying. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 2884-2892.
  • [48] Domain-Invariant Feature Learning for General Face Forgery Detection. Zhang, Jian; Ni, Jiangqun. 2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2023: 2321-2326.
  • [49] Learning List-Level Domain-Invariant Representations for Ranking. Xian, Ruicheng; Zhuang, Honglei; Qin, Zhen; Zamani, Hamed; Lu, Jing; Ma, Ji; Hui, Kai; Zhao, Han; Wang, Xuanhui; Bendersky, Michael. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023.
  • [50] Domain-Invariant Feature Alignment Using Variational Inference For Partial Domain Adaptation. Choudhuri, Sandipan; Adeniye, Suli; Sen, Arunabha; Venkateswara, Hemanth. 2022 56TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2022: 349-355.