Solving inverse problems via auto-encoders

Cited by: 6
Authors
Peng P. [1 ]
Jalali S. [2 ]
Yuan X. [2 ]
Affiliations
[1] Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ 08854
[2] Nokia Bell Labs, Murray Hill, NJ 07974
Source
IEEE Journal on Selected Areas in Information Theory, Vol. 1, No. 1, 2020. Institute of Electrical and Electronics Engineers Inc. Corresponding author: Jalali, Shirin (shirin.jalali@nokia-bell-labs.com)
Keywords
Auto-encoders; Compressed sensing; Deep learning; Generative models; Inverse problems;
DOI
10.1109/JSAIT.2020.2983643
Abstract
Compressed sensing (CS) concerns recovering a structured signal from its under-determined linear measurements. Starting from sparsity, recovery methods have steadily moved towards more complex structures. Emerging machine learning tools, such as generative functions based on neural networks, are able to learn general complex structures from training data. This makes them potentially powerful tools for designing CS algorithms. Consider a desired class of signals Q, Q ⊂ R^n, and a corresponding generative function g : U^k → R^n, U ⊂ R, such that sup_{x∈Q} min_{u∈U^k} (1/√n)‖x − g(u)‖₂ ≤ δ. A recovery method based on g seeks g(u) with minimum measurement error. In this paper, the performance of such a recovery method is studied, under both noisy and noiseless measurements. In the noiseless case, roughly speaking, it is proven that, as k and n grow without bound and δ converges to zero, if the number of measurements (m) is larger than the input dimension of the generative model (k), then asymptotically, almost lossless recovery is possible. Furthermore, the performance of an efficient iterative algorithm based on projected gradient descent is studied. In this case, an auto-encoder is used to define and enforce the source structure at the projection step. The auto-encoder is defined by encoder and decoder (generative) functions f : R^n → U^k and g : U^k → R^n, respectively. We theoretically prove that, roughly, given m > 40k log(1/δ) measurements, such an algorithm converges to the vicinity of the desired result, even in the presence of additive white Gaussian noise. Numerical results exploring the effectiveness of the proposed method are presented. © 2020 IEEE.
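The iterative algorithm described in the abstract can be sketched as a projected-gradient-descent loop in which the auto-encoder g(f(·)) plays the role of the projection onto the learned signal structure. The sketch below is illustrative, not the authors' implementation; the function name `pgd_recover`, the step size `eta`, and the iteration count are assumptions, and `f`/`g` stand in for trained encoder/decoder networks.

```python
import numpy as np

def pgd_recover(y, A, f, g, eta=0.1, iters=100):
    """Recover x from measurements y ≈ A x via projected gradient descent.

    f: encoder  R^n -> U^k,  g: decoder (generative function) U^k -> R^n.
    Each iteration takes a gradient step on the measurement error
    ||y - A x||^2, then projects onto the range of g by passing the
    estimate through the auto-encoder g(f(.)), enforcing the source
    structure at the projection step.
    """
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        # gradient step on the measurement error
        s = x + eta * A.T @ (y - A @ x)
        # projection step via the auto-encoder
        x = g(f(s))
    return x
```

With a perfect auto-encoder for the signal class (here, for illustration, the identity map) and enough measurements, the loop reduces to gradient descent on the least-squares objective.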
Pages: 312-323
Page count: 11
Related papers
50 in total
  • [1] Fisher Auto-Encoders
    Elkhalil, Khalil
    Hasan, Ali
    Ding, Jie
    Farsiu, Sina
    Tarokh, Vahid
    24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130 : 352 - 360
  • [2] Ornstein Auto-Encoders
    Choi, Youngwon
    Won, Joong-Ho
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 2172 - 2178
  • [3] Transforming Auto-Encoders
    Hinton, Geoffrey E.
    Krizhevsky, Alex
    Wang, Sida D.
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2011, PT I, 2011, 6791 : 44 - 51
  • [4] Correlated Variational Auto-Encoders
    Tang, Da
    Liang, Dawen
    Jebara, Tony
    Ruozzi, Nicholas
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [5] Hyperspherical Variational Auto-Encoders
    Davidson, Tim R.
    Falorsi, Luca
    De Cao, Nicola
    Kipf, Thomas
    Tomczak, Jakub M.
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2018, : 856 - 865
  • [6] Directed Graph Auto-Encoders
    Kollias, Georgios
    Kalantzis, Vasileios
    Ide, Tsuyoshi
    Lozano, Aurelie
    Abe, Naoki
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 7211 - 7219
  • [7] Graph Attention Auto-Encoders
    Salehi, Amin
    Davulcu, Hasan
    2020 IEEE 32ND INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI), 2020, : 989 - 996
  • [8] Conservativeness of Untied Auto-Encoders
    Im, Daniel Jiwoong
    Belghazi, Mohamed Ishmael
    Memisevic, Roland
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1694 - 1700
  • [9] Interpretable and effective hashing via Bernoulli variational auto-encoders
    Mena, Francisco
    Nanculef, Ricardo
    Valle, Carlos
    INTELLIGENT DATA ANALYSIS, 2020, 24 (24) : S141 - S166
  • [10] Understanding stock market instability via graph auto-encoders
    Gorduza, Dragos
    Zohren, Stefan
    Dong, Xiaowen
    EPJ DATA SCIENCE, 2025, 14 (01)