Learning Causally Disentangled Representations via the Principle of Independent Causal Mechanisms

Citations: 0
Authors
Komanduri, Aneesh [1 ]
Wu, Yongkai [2 ]
Chen, Feng [3 ]
Wu, Xintao [1 ]
Affiliations
[1] Univ Arkansas, Fayetteville, AR 72701 USA
[2] Clemson Univ, Clemson, SC USA
[3] Univ Texas Dallas, Richardson, TX 75083 USA
Funding
US National Institutes of Health; US National Science Foundation
Keywords
COMPONENT ANALYSIS;
DOI
Not available
Abstract
Learning disentangled causal representations is a challenging problem that has gained significant attention recently due to its implications for extracting meaningful information for downstream tasks. In this work, we define a new notion of causal disentanglement from the perspective of independent causal mechanisms. We propose ICM-VAE, a framework for learning causally disentangled representations supervised by causally related observed labels. We model causal mechanisms using nonlinear learnable flow-based diffeomorphic functions to map noise variables to latent causal variables. Further, to promote the disentanglement of causal factors, we propose a causal disentanglement prior learned from auxiliary labels and the latent causal structure. We theoretically show the identifiability of causal factors and mechanisms up to permutation and elementwise reparameterization. We empirically demonstrate that our framework induces highly disentangled causal factors, improves interventional robustness, and is compatible with counterfactual generation.
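The abstract describes latent causal variables that are generated by applying learnable flow-based diffeomorphic mechanisms to exogenous noise, following a causal graph. The sketch below is one illustrative reading of that idea, not the authors' ICM-VAE code: the affine conditional flow, all class and variable names, and the fixed adjacency matrix are assumptions chosen for brevity, and the encoder, decoder, label supervision, and causal disentanglement prior are omitted.

```python
# Illustrative sketch only (not the authors' ICM-VAE implementation): each latent
# causal variable z_i is produced from an exogenous noise variable eps_i by a
# learnable invertible mechanism conditioned on its causal parents. The affine
# conditional flow and all names here are assumptions made for brevity.
import torch
import torch.nn as nn


class AffineCausalMechanism(nn.Module):
    """Invertible map z_i = exp(s(pa_i)) * eps_i + t(pa_i), conditioned on parents."""

    def __init__(self, num_parents, hidden=32):
        super().__init__()
        in_dim = max(num_parents, 1)  # nodes without parents receive a zero placeholder
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 2))

    def forward(self, eps_i, parents):
        scale, shift = self.net(parents).chunk(2, dim=-1)
        z_i = torch.exp(scale) * eps_i + shift   # diffeomorphic in eps_i for fixed parents
        log_det = scale.squeeze(-1)              # log |d z_i / d eps_i|
        return z_i, log_det


class CausalLatentModel(nn.Module):
    """Maps noise eps to causal latents z along a fixed DAG given as an adjacency matrix."""

    def __init__(self, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)     # A[i, j] = 1 if z_j is a parent of z_i
        self.mechanisms = nn.ModuleList(
            AffineCausalMechanism(int(adjacency[i].sum().item()))
            for i in range(adjacency.shape[0])
        )

    def forward(self, eps):
        batch, d = eps.shape
        zs, log_det = [], torch.zeros(batch)
        for i in range(d):                        # assumes variables are topologically ordered
            parent_idx = self.A[i].nonzero(as_tuple=True)[0].tolist()
            parents = (torch.cat([zs[j] for j in parent_idx], dim=-1)
                       if parent_idx else torch.zeros(batch, 1))
            z_i, ld = self.mechanisms[i](eps[:, i : i + 1], parents)
            zs.append(z_i)
            log_det = log_det + ld
        return torch.cat(zs, dim=-1), log_det


# Usage on a 3-variable chain z0 -> z1 -> z2; in a VAE, z would be fed to the decoder
# and eps would come from the (label-supervised) encoder.
A = torch.tensor([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
model = CausalLatentModel(A)
eps = torch.randn(8, 3)
z, log_det = model(eps)
```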
Pages: 4308-4316
Number of pages: 9
Related Papers
50 records in total
  • [1] Learning Disentangled Representations via Independent Subspaces
    Awiszus, Maren
    Ackermann, Hanno
    Rosenhahn, Bodo
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 560 - 568
  • [2] On Causally Disentangled Representations
    Reddy, Abbavaram Gowtham
    Benin, Godfrey L.
    Balasubramanian, Vineeth N.
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 8089 - 8097
  • [3] Disentangled representations for causal cognition
    Torresan, Filippo
    Baltieri, Manuel
    PHYSICS OF LIFE REVIEWS, 2024, 51 : 343 - 381
  • [4] Independent Subspace Analysis for Unsupervised Learning of Disentangled Representations
    Stuhmer, Jan
    Turner, Richard E.
    Nowozin, Sebastian
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108
  • [5] Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness
    Suter, Raphael
    Miladinovic, Dorde
    Schoelkopf, Bernhard
    Bauer, Stefan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [6] Learning Independent Causal Mechanisms
    Parascandolo, Giambattista
    Kilbertus, Niki
    Rojas-Carulla, Mateo
    Scholkopf, Bernhard
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [7] Learning disentangled representations via product manifold projection
    Fumero, Marco
    Cosmo, Luca
    Melzi, Simone
    Rodola, Emanuele
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [8] Speaker-Independent Emotional Voice Conversion via Disentangled Representations
    Chen, Xunquan
    Xu, Xuexin
    Chen, Jinhui
    Zhang, Zhizhong
    Takiguchi, Tetsuya
    Hancock, Edwin R.
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 7480 - 7493
  • [9] Learning Disentangled Textual Representations via Statistical Measures of Similarity
    Colombo, Pierre
    Staerman, Guillaume
    Noiry, Nathan
    Piantanida, Pablo
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 2614 - 2630
  • [10] Learning Disentangled Representations for Recommendation
    Ma, Jianxin
    Zhou, Chang
    Cui, Peng
    Yang, Hongxia
    Zhu, Wenwu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32