Uncertainty Autoencoders: Learning Compressed Representations via Variational Information Maximization

Cited by: 0
Authors:
Grover, Aditya [1]
Ermon, Stefano [1]
Affiliations:
[1] Stanford University, Stanford, CA 94305, USA
Keywords: (none listed)
DOI: Not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Compressed sensing techniques enable efficient acquisition and recovery of sparse, high-dimensional data signals via low-dimensional projections. In this work, we propose Uncertainty Autoencoders, a learning framework for unsupervised representation learning inspired by compressed sensing. We treat the low-dimensional projections as noisy latent representations of an autoencoder and directly learn both the acquisition (i.e., encoding) and amortized recovery (i.e., decoding) procedures. Our learning objective optimizes a tractable variational lower bound on the mutual information between the datapoints and the latent representations. We show how our framework provides a unified treatment of several lines of research in dimensionality reduction, compressed sensing, and generative modeling. Empirically, we demonstrate a 32% improvement on average over competing approaches for the task of statistical compressed sensing of high-dimensional datasets.
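The abstract describes the training objective only at a high level: learn a noisy measurement process q(y|x) jointly with an amortized decoder p(x|y) by maximizing a variational lower bound on the mutual information I(X; Y). The following is a minimal PyTorch sketch of that idea, not the authors' released implementation; the class name, network sizes, measurement count, and fixed Gaussian noise level are illustrative assumptions. With an isotropic Gaussian decoder, maximizing the bound reduces to minimizing squared reconstruction error, since the bound differs from the expected reconstruction log-likelihood only by the constant data entropy.

```python
# Minimal sketch of an uncertainty-autoencoder-style objective.
# Assumptions (not from the paper's code): linear acquisition with
# additive Gaussian noise, a small MLP decoder, and fixed noise_std.
import torch
import torch.nn as nn

class UncertaintyAutoencoder(nn.Module):
    def __init__(self, data_dim=784, num_measurements=25, noise_std=0.1):
        super().__init__()
        # Learned acquisition matrix W: y = Wx + eps acts as the
        # noisy low-dimensional latent code, as in compressed sensing.
        self.encoder = nn.Linear(data_dim, num_measurements, bias=False)
        # Amortized recovery network: parameterizes the mean of p(x | y).
        self.decoder = nn.Sequential(
            nn.Linear(num_measurements, 256),
            nn.ReLU(),
            nn.Linear(256, data_dim),
        )
        self.noise_std = noise_std

    def loss(self, x):
        # Sample from q(y | x): project, then corrupt with Gaussian noise.
        y_mean = self.encoder(x)
        y = y_mean + self.noise_std * torch.randn_like(y_mean)
        x_hat = self.decoder(y)
        # Gaussian negative log-likelihood up to constants: squared error.
        # Minimizing this maximizes the variational lower bound on I(X; Y).
        return ((x - x_hat) ** 2).sum(dim=-1).mean()

# Usage sketch on a stand-in batch of flattened 28x28 images.
model = UncertaintyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)
opt.zero_grad()
model.loss(x).backward()
opt.step()
```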
Pages: 11
Related Papers (50 total)
  • [21] Variational autoencoders learn transferrable representations of metabolomics data
    Gomari, Daniel P.
    Schweickart, Annalise
    Cerchietti, Leandro
    Paietta, Elisabeth
    Fernandez, Hugo
    Al-Amin, Hassen
    Suhre, Karsten
    Krumsiek, Jan
    COMMUNICATIONS BIOLOGY, 2022, 5 (01)
  • [23] Learning Fair Representations via Rate-Distortion Maximization
    Chowdhury, Somnath Basu Roy
    Chaturvedi, Snigdha
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2022, 10 : 1159 - 1174
  • [24] Learning Latent Subspaces in Variational Autoencoders
    Klys, Jack
    Snell, Jake
    Zemel, Richard
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [25] Learning Grounded Meaning Representations with Autoencoders
    Silberer, Carina
    Lapata, Mirella
    PROCEEDINGS OF THE 52ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1, 2014, : 721 - 732
  • [26] Variational Domain Adversarial Learning With Mutual Information Maximization for Speaker Verification
    Tu, Youzhi
    Mak, Man-Wai
    Chien, Jen-Tzung
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2020, 28 : 2013 - 2024
  • [27] Solving deep-learning density functional theory via variational autoencoders
    Costa, Emanuele
    Scriva, Giuseppe
    Pilati, Sebastiano
    MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2024, 5 (03)
  • [28] Laplacian Autoencoders for Learning Stochastic Representations
    Miani, Marco
    Warburg, Frederik
    Moreno-Munoz, Pablo
    Detlefsen, Nicki Skafte
    Hauberg, Soren
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [29] InfoVAEGAN: Learning Joint Interpretable Representations by Information Maximization and Maximum Likelihood
    Ye, Fei
    Bors, Adrian G.
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 749 - 753
  • [30] Creating Latent Representations of Synthesizer Patches using Variational Autoencoders
    Peachey, Matthew
    Oore, Sageev
    Malloch, Joseph
    2023 4TH INTERNATIONAL SYMPOSIUM ON THE INTERNET OF SOUNDS, 2023, : 83 - 89