Unbiased Semantic Representation Learning Based on Causal Disentanglement for Domain Generalization

Cited by: 0
Authors
Jin, Xuanyu [1 ,2 ]
Li, Ni [1 ,2 ]
Kong, Wangzeng [1 ,2 ]
Tang, Jiajia [1 ,2 ]
Yang, Bing [1 ,2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci, Hangzhou, Peoples R China
[2] Key Lab Brain Machine Collaborat Intelligence Zhej, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Transfer learning; domain generalization; disentangled representation; causal intervention; semantic representation; INFERENCE;
DOI
10.1145/3659953
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812;
Abstract
Domain generalization mitigates the domain shift among multiple source domains so that a trained model generalizes to an unseen target domain. However, spurious correlations, usually caused by contextual priors (e.g., background), make the domain shift hard to eliminate, so it is critical to model the intrinsic causal mechanism. Existing domain generalization methods only attempt to disentangle semantic from context-related features by modeling the causation between inputs and labels, entirely ignoring unidentifiable but important confounders. In this article, a Causal Disentangled Intervention Model (CDIM) is proposed, to the best of our knowledge for the first time, to construct confounders via causal intervention. Specifically, a generative model is employed to disentangle the semantic and context-related features. The contextual information of each domain produced by the generative model is treated as one confounder layer, and the center of all context-related features is used for fine-grained hierarchical modeling of confounders. The semantic and confounding features from each layer are then combined to train an unbiased classifier that exhibits both transferability and robustness on unseen distributions. CDIM is evaluated on three widely recognized benchmark datasets, namely Digit-DG, PACS, and NICO, with extensive ablation studies. The experimental results demonstrate that the proposed model achieves state-of-the-art performance.
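The intervention step sketched in the abstract, averaging an unbiased classifier's predictions over confounder strata built from per-domain context centers plus their global center, can be illustrated loosely as follows. This is a minimal sketch with random stand-in features; none of the variable names or the toy linear classifier come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical disentangled features. In CDIM these would come from the
# generative encoder; here they are random stand-ins.
n_domains, feat_dim, n_classes = 3, 8, 4
context_feats = rng.normal(size=(n_domains, 50, feat_dim))  # per-domain context features

# Each domain's context centroid acts as one confounder stratum; the center
# of all context features adds a coarser stratum (hierarchical modeling).
domain_centers = context_feats.mean(axis=1)               # (n_domains, feat_dim)
global_center = domain_centers.mean(axis=0, keepdims=True)
confounders = np.vstack([domain_centers, global_center])  # (n_domains + 1, feat_dim)

def classifier(x, W):
    """Toy softmax classifier over a concatenated [semantic; confounder] feature."""
    logits = x @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

W = rng.normal(size=(2 * feat_dim, n_classes))
semantic = rng.normal(size=feat_dim)  # semantic feature of one input

# Backdoor-style adjustment: average predictions over all confounder strata
# with a uniform prior, instead of conditioning on the observed context.
probs = np.mean(
    [classifier(np.concatenate([semantic, c]), W) for c in confounders],
    axis=0,
)
```

The averaging over strata is what breaks the spurious context-to-label path: the prediction no longer depends on which background the input happened to arrive with.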
Pages: 20
Related Papers
50 records in total
  • [21] Meta-learning the invariant representation for domain generalization
    Jia, Chen
    Zhang, Yue
    MACHINE LEARNING, 2024, 113 (04) : 1661 - 1681
  • [23] ICRL: independent causality representation learning for domain generalization
    Xu, Liwen
    Shao, Yuxuan
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [24] Learning Causal Semantic Representation for Out-of-Distribution Prediction
    Liu, Chang
    Sun, Xinwei
    Wang, Jindong
    Tang, Haoyue
    Li, Tao
    Qin, Tao
    Chen, Wei
    Liu, Tie-Yan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [25] Unpaired Multi-Domain Causal Representation Learning
    Sturma, Nils
    Squires, Chandler
    Drton, Mathias
    Uhler, Caroline
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [26] Video Audio Domain Generalization via Confounder Disentanglement
    Zhang, Shengyu
    Feng, Xusheng
    Fan, Wenyan
    Fang, Wenjing
    Feng, Fuli
    Ji, Wei
    Li, Shuo
    Wang, Li
    Zhao, Shanshan
    Zhao, Zhou
    Chua, Tat-Seng
    Wu, Fei
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023, : 15322 - 15330
  • [27] Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects
    Johansson, Fredrik D.
    Shalit, Uri
    Kallus, Nathan
    Sontag, David
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [29] Learning generalized visual relations for domain generalization semantic segmentation
    Li, Zijun
    Liao, Muxin
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 267
  • [30] Flexibly Fair Representation Learning by Disentanglement
    Creager, Elliot
    Madras, David
    Jacobsen, Joern-Henrik
    Weis, Marissa A.
    Swersky, Kevin
    Pitassi, Toniann
    Zemel, Richard
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97