Learning Interpretable Representations with Informative Entanglements

Cited by: 0
|
Authors
Beyazit, Ege [1 ]
Tuncel, Doruk [2 ]
Yuan, Xu [1 ]
Tzeng, Nian-Feng [1 ]
Wu, Xindong [3 ]
Affiliations
[1] Univ Louisiana Lafayette, Lafayette, LA 70504 USA
[2] Johannes Kepler Univ Linz, Linz, Austria
[3] Mininglamp Acad Sci, Beijing, Peoples R China
Source
PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE | 2020
Funding
U.S. National Science Foundation;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Learning interpretable representations in an unsupervised setting is an important yet challenging task. Existing unsupervised interpretable methods focus on extracting independent salient features from data; however, they overlook the fact that the entanglement of salient features may itself be informative. Acknowledging these entanglements can improve interpretability, allowing a wider variety of higher-quality salient features to be extracted. In this paper, we propose a new method that enables Generative Adversarial Networks (GANs) to discover salient features that may be entangled in an informative manner, instead of extracting only disentangled features. Specifically, we propose a regularizer that penalizes, during training, the disagreement between the extracted feature interactions and a given dependency structure. We model these interactions using a Bayesian network, estimate the maximum-likelihood parameters, and calculate a negative likelihood score to measure the disagreement. Upon qualitatively and quantitatively evaluating the proposed method on both synthetic and real-world datasets, we show that the proposed regularizer guides GANs to learn representations whose disentanglement scores are competitive with the state of the art, while extracting a wider variety of salient features.
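The regularization idea described in the abstract can be read as a concrete computation: extract latent codes for a batch of samples, fit maximum-likelihood parameters of a Bayesian network with a given dependency structure over the code dimensions, and use the resulting negative likelihood as a penalty term in the GAN objective. The sketch below is a minimal NumPy illustration of that reading, assuming a linear-Gaussian parameterization of the Bayesian network; the function name, the toy DAG, and the parameterization are assumptions made here for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): score how well a batch of latent
# codes agrees with a given dependency structure (a DAG over code dimensions)
# via the negative log-likelihood of a linear-Gaussian Bayesian network whose
# parameters are fit by maximum likelihood on the batch.

import numpy as np


def bn_negative_log_likelihood(codes, parents):
    """codes: (n_samples, n_dims) latent codes.
    parents: dict mapping each dimension index to the list of its parent
    dimension indices in the assumed DAG (missing keys = root nodes)."""
    n, d = codes.shape
    total_nll = 0.0
    for j in range(d):
        y = codes[:, j]
        pa = parents.get(j, [])
        if pa:
            # Design matrix: parent values plus an intercept column.
            X = np.column_stack([codes[:, pa], np.ones(n)])
            # For linear-Gaussian CPDs, least squares gives the MLE weights.
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
        else:
            # Root node: the MLE mean is the sample mean.
            resid = y - y.mean()
        var = max(resid.var(), 1e-8)  # MLE variance, floored for stability
        # Gaussian negative log-likelihood of this node given its parents.
        total_nll += 0.5 * n * np.log(2 * np.pi * var) + 0.5 * resid @ resid / var
    return total_nll / n  # average per sample


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy latent codes where dimension 2 depends on dimensions 0 and 1.
    z0 = rng.normal(size=500)
    z1 = rng.normal(size=500)
    z2 = 0.8 * z0 - 0.5 * z1 + 0.1 * rng.normal(size=500)
    codes = np.column_stack([z0, z1, z2])

    # A dependency structure matching the data vs. one that disagrees with it.
    good_dag = {2: [0, 1]}
    bad_dag = {0: [2], 1: [2]}
    print("NLL under matching DAG:   ", bn_negative_log_likelihood(codes, good_dag))
    print("NLL under mismatched DAG: ", bn_negative_log_likelihood(codes, bad_dag))
```

In a training loop, a score like this would typically be weighted and added to the generator or auxiliary-encoder loss so that feature interactions agreeing with the given dependency structure incur a smaller penalty; that weighting scheme is likewise an assumption of this sketch.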
Pages: 1970-1976
Page count: 7
Related papers
50 records in total
  • [1] Lifelong learning of interpretable image representations
    Ye, Fei
    Bors, Adrian G.
    2020 TENTH INTERNATIONAL CONFERENCE ON IMAGE PROCESSING THEORY, TOOLS AND APPLICATIONS (IPTA), 2020,
  • [2] InfoUCL: Learning Informative Representations for Unsupervised Continual Learning
    Zhang, Liang
    Zhao, Jiangwei
    Wu, Qingbo
    Pan, Lili
    Li, Hongliang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 10779 - 10791
  • [3] Learning Transferrable and Interpretable Representations for Domain Generalization
    Du, Zhekai
    Li, Jingjing
    Lu, Ke
    Zhu, Lei
    Huang, Zi
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 3340 - 3349
  • [4] INTERPRETABLE MACHINE LEARNING: Mining for informative signals in biological sequences
    Alaa, Ahmed M.
    NATURE MACHINE INTELLIGENCE, 2022, 4 (08) : 665 - 666
  • [5] Interpretable and Informative Explanations of Outcomes
    El Gebaly, Kareem
    Agrawal, Parag
    Golab, Lukasz
    Korn, Flip
    Srivastava, Divesh
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2014, 8 (01): 61 - 72
  • [6] Decontextualized learning for interpretable hierarchical representations of visual patterns
    Etheredge, Robert Ian
    Schartl, Manfred
    Jordan, Alex
    PATTERNS, 2021, 2 (02):
  • [7] Inverse Decision Modeling: Learning Interpretable Representations of Behavior
    Jarrett, Daniel
    Huyuk, Alihan
    van der Schaar, Mihaela
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [8] Learning Interpretable Disentangled Representations Using Adversarial VAEs
    Sarhan, Mhd Hasan
    Eslami, Abouzar
    Navab, Nassir
    Albarqouni, Shadi
    DOMAIN ADAPTATION AND REPRESENTATION TRANSFER AND MEDICAL IMAGE LEARNING WITH LESS LABELS AND IMPERFECT DATA, DART 2019, MIL3ID 2019, 2019, 11795 : 37 - 44
  • [9] Interpretable molecular encodings and representations for machine learning tasks
    Weckbecker, Moritz
    Anzela, Aleksandar
    Yang, Zewen
    Hattab, Georges
    COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL, 2024, 23 : 2326 - 2336
  • [10] Selective information enhancement learning for creating interpretable representations in competitive learning
    Kamimura, Ryotaro
    NEURAL NETWORKS, 2011, 24 (04) : 387 - 405