Unbiased Semantic Representation Learning Based on Causal Disentanglement for Domain Generalization

Cited by: 0
Authors
Jin, Xuanyu [1 ,2 ]
Li, Ni [1 ,2 ]
Kong, Wangzeng [1 ,2 ]
Tang, Jiajia [1 ,2 ]
Yang, Bing [1 ,2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci, Hangzhou, Peoples R China
[2] Key Lab Brain Machine Collaborat Intelligence Zhej, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Transfer learning; domain generalization; disentangled representation; causal intervention; semantic representation; inference
DOI
10.1145/3659953
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Domain generalization primarily mitigates domain shift among multiple source domains, generalizing the trained model to an unseen target domain. However, spurious correlations, usually caused by context priors (e.g., background), make it challenging to get rid of domain shift. Therefore, it is critical to model the intrinsic causal mechanism. Existing domain generalization methods only attempt to disentangle semantic and context-related features by modeling the causation between inputs and labels, entirely ignoring unidentifiable yet important confounders. In this article, a Causal Disentangled Intervention Model (CDIM) is proposed, to the best of our knowledge for the first time, to construct confounders via causal intervention. Specifically, a generative model is employed to disentangle the semantic and context-related features. The contextual information of each domain produced by the generative model can be treated as a confounder layer, and the center of all context-related features is used for fine-grained hierarchical modeling of confounders. The semantic and confounding features from each layer are then combined to train an unbiased classifier, which exhibits both transferability and robustness on target domains with unknown distributions. CDIM is evaluated on three widely recognized benchmark datasets, namely Digit-DG, PACS, and NICO, with extensive ablation studies. The experimental results clearly demonstrate that the proposed model achieves state-of-the-art performance.
Pages: 20
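
The paper's implementation is not reproduced in this record. Purely as a hedged illustration of the scheme outlined in the abstract above (disentangling semantic from context-related features, treating per-domain context statistics plus their global center as confounder strata, and averaging class predictions over those strata in the spirit of backdoor adjustment), the following PyTorch sketch may help. All names (DisentangledEncoder, build_confounder_strata, backdoor_adjusted_logits), architectures, and hyperparameters are hypothetical stand-ins rather than CDIM's actual components; in particular, a simple two-headed encoder replaces the paper's generative model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Toy encoder that splits an image into semantic and context features.

    CDIM uses a generative model for disentanglement; here a shared backbone
    with two linear heads is only a stand-in for that component.
    """
    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.semantic_head = nn.Linear(64, feat_dim)  # z_s: label-related features
        self.context_head = nn.Linear(64, feat_dim)   # z_c: context/background features

    def forward(self, x):
        h = self.backbone(x)
        return self.semantic_head(h), self.context_head(h)

def build_confounder_strata(context_feats, domain_ids):
    """Per-domain context centroids plus the global center form the confounder strata."""
    strata = [context_feats[domain_ids == d].mean(0) for d in domain_ids.unique()]
    strata.append(context_feats.mean(0))  # center of all context-related features
    return torch.stack(strata)            # shape: (num_strata, feat_dim)

def backdoor_adjusted_logits(classifier, z_s, strata):
    """Average predictions over strata: P(y | do(x)) approximated by mean_c P(y | x, c)."""
    logits = [classifier(torch.cat([z_s, c.expand_as(z_s)], dim=1)) for c in strata]
    return torch.stack(logits).mean(0)

# Minimal training step on a synthetic multi-source batch (shapes are illustrative).
encoder = DisentangledEncoder()
classifier = nn.Linear(64 * 2, 10)  # consumes semantic + confounder features
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

x = torch.randn(32, 3, 32, 32)           # images drawn from several source domains
y = torch.randint(0, 10, (32,))          # class labels
domain_ids = torch.randint(0, 3, (32,))  # three source domains

z_s, z_c = encoder(x)
strata = build_confounder_strata(z_c.detach(), domain_ids)
loss = F.cross_entropy(backdoor_adjusted_logits(classifier, z_s, strata), y)
opt.zero_grad()
loss.backward()
opt.step()
```

Under these assumptions, averaging the logits over the confounder strata approximates intervening on the input, so the classifier is trained not to rely on whichever context happened to co-occur with a class in the source domains; that is the sense in which the abstract calls the resulting classifier unbiased.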