Solving inverse problems via auto-encoders

Cited by: 6
|
Authors
Peng P. [1 ]
Jalali S. [2 ]
Yuan X. [2 ]
Affiliations
[1] The Department of Electrical and Computer Engineering, Rutgers University, Piscataway, 08854, NJ
[2] Nokia Bell Labs, Murray Hill, 07974, NJ
Source
Jalali, Shirin (shirin.jalali@nokia-bell-labs.com) | IEEE Journal on Selected Areas in Information Theory, 2020, Vol. 1, Issue 01 / Institute of Electrical and Electronics Engineers Inc.
Keywords
Auto-encoders; Compressed sensing; Deep learning; Generative models; Inverse problems;
DOI
10.1109/JSAIT.2020.2983643
Abstract
Compressed sensing (CS) concerns recovering a structured signal from its under-determined linear measurements. Starting from sparsity, recovery methods have steadily moved towards more complex structures. Emerging machine learning tools, such as generative functions based on neural networks, are able to learn general complex structures from training data. This makes them potentially powerful tools for designing CS algorithms. Consider a desired class of signals Q, Q ⊂ R^n, and a corresponding generative function g : U^k → R^n, U ⊂ R, such that sup_{x ∈ Q} min_{u ∈ U^k} (1/√n)‖x − g(u)‖ ≤ δ. A recovery method based on g seeks g(u) with minimum measurement error. In this paper, the performance of such a recovery method is studied, under both noisy and noiseless measurements. In the noiseless case, roughly speaking, it is proven that, as k and n grow without bound and δ converges to zero, if the number of measurements (m) is larger than the input dimension of the generative model (k), then asymptotically almost lossless recovery is possible. Furthermore, the performance of an efficient iterative algorithm based on projected gradient descent is studied. In this case, an auto-encoder is used to define and enforce the source structure at the projection step. The auto-encoder is defined by encoder and decoder (generative) functions f : R^n → U^k and g : U^k → R^n, respectively. We theoretically prove that, roughly, given m > 40k log(1/δ) measurements, such an algorithm converges to the vicinity of the desired result, even in the presence of additive white Gaussian noise. Numerical results exploring the effectiveness of the proposed method are presented. © 2020 IEEE.
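The projected-gradient-descent loop described in the abstract alternates a gradient step on the measurement error with a projection onto the signal class, implemented by one pass through the auto-encoder (encode with f, decode with g). Below is a minimal Python/NumPy sketch of that loop, assuming a sensing matrix A, measurements y, and trained encoder/decoder callables f and g are available; the function name, step size eta, and iteration count are illustrative assumptions, not taken from the paper's code.

    import numpy as np

    def pgd_recover(y, A, f, g, eta=0.1, n_iters=100):
        # Recover x from y ≈ A x by projected gradient descent, where the
        # projection onto the signal class Q is approximated by one pass
        # through the auto-encoder: x <- g(f(x)).
        #   y : (m,) measurement vector
        #   A : (m, n) sensing matrix
        #   f : encoder, R^n -> U^k;  g : decoder (generative function), U^k -> R^n
        n = A.shape[1]
        x = g(f(np.zeros(n)))             # start from a point in the decoder's range
        for _ in range(n_iters):
            grad = A.T @ (A @ x - y)      # gradient of 0.5 * ||A x - y||^2
            x = x - eta * grad            # gradient step on measurement error
            x = g(f(x))                   # projection step via the auto-encoder
        return x

With a well-trained auto-encoder, the projection step keeps each iterate close to the class Q; the paper's m > 40k log(1/δ) condition quantifies how many measurements suffice for this iteration to converge to the vicinity of the true signal, even under additive white Gaussian noise.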
Pages: 312-323
Page count: 11