Solving inverse problems via auto-encoders

Cited by: 6
Authors
Peng P. [1]
Jalali S. [2]
Yuan X. [2]
Affiliations
[1] Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ 08854
[2] Nokia Bell Labs, Murray Hill, NJ 07974
Source
Corresponding author: Jalali, Shirin (shirin.jalali@nokia-bell-labs.com) | Institute of Electrical and Electronics Engineers Inc., vol. 1, no. 1, 2020
Keywords
Auto-encoders; Compressed sensing; Deep learning; Generative models; Inverse problems
DOI
10.1109/JSAIT.2020.2983643
Abstract
Compressed sensing (CS) is about recovering a structured signal from its under-determined linear measurements. Starting from sparsity, recovery methods have steadily moved towards more complex structures. Emerging machine learning tools such as generative functions based on neural networks are able to learn general complex structures from training data. This makes them potentially powerful tools for designing CS algorithms. Consider a desired class of signals Q, Q ⊂ R^n, and a corresponding generative function g : U^k → R^n, U ⊂ R, such that sup_{x∈Q} min_{u∈U^k} (1/√n) ‖x − g(u)‖_2 ≤ δ. A recovery method based on g seeks g(u) with minimum measurement error. In this paper, the performance of such a recovery method is studied, under both noisy and noiseless measurements. In the noiseless case, roughly speaking, it is proven that, as k and n grow without bound and δ converges to zero, if the number of measurements (m) is larger than the input dimension of the generative model (k), then, asymptotically, almost lossless recovery is possible. Furthermore, the performance of an efficient iterative algorithm based on projected gradient descent is studied. In this case, an auto-encoder is used to define and enforce the source structure at the projection step. The auto-encoder is defined by encoder and decoder (generative) functions f : R^n → U^k and g : U^k → R^n, respectively. We theoretically prove that, roughly, given m > 40k log(1/δ) measurements, such an algorithm converges to the vicinity of the desired result, even in the presence of additive white Gaussian noise. Numerical results exploring the effectiveness of the proposed method are presented. © 2020 IEEE.
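The projected gradient descent scheme described in the abstract (gradient step on the measurement error, then projection through the auto-encoder g∘f) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a linear orthonormal pair (f, g) stands in for a trained auto-encoder, so g∘f is exact projection onto a k-dimensional subspace, and all names (W, eta, the problem sizes) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 64, 4, 20  # ambient dim, latent dim, number of measurements (m > k)

# Toy linear auto-encoder: decoder g(u) = W u, encoder f(x) = W^T x,
# with W orthonormal, so g(f(x)) is the projection onto a k-dim subspace.
W, _ = np.linalg.qr(rng.standard_normal((n, k)))
f = lambda x: W.T @ x
g = lambda u: W @ u

x_true = g(rng.standard_normal(k))            # signal inside the model class Q
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                                # noiseless measurements

# Projected gradient descent on ||y - A x||^2:
# gradient step, then enforce the structure via the auto-encoder.
x = np.zeros(n)
eta = 0.5
for _ in range(200):
    x = x + eta * A.T @ (y - A @ x)  # gradient step on measurement error
    x = g(f(x))                      # projection step through g∘f

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.2e}")
```

With m = 20 measurements and k = 4 latent dimensions, the iteration converges to (near) exact recovery in the noiseless case, consistent with the abstract's m > k regime; replacing f and g with trained neural encoder/decoder functions gives the nonlinear setting the paper analyzes.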
Pages: 312–323
Page count: 11
Related papers
(50 records in total)
  • [31] Sparse Wavelet Auto-Encoders for Image classification
    Hassairi, Salima
    Ejbali, Ridha
    Zaied, Mourad
    2016 INTERNATIONAL CONFERENCE ON DIGITAL IMAGE COMPUTING: TECHNIQUES AND APPLICATIONS (DICTA), 2016, : 625 - 630
  • [32] A hybrid learning model based on auto-encoders
    Zhou, Ju
    Ju, Li
    Zhang, Xiaolong
    PROCEEDINGS OF THE 2017 12TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA), 2017, : 522 - 528
  • [33] HGATE: Heterogeneous Graph Attention Auto-Encoders
    Wang, Wei
    Suo, Xiaoyang
    Wei, Xiangyu
    Wang, Bin
    Wang, Hao
    Dai, Hong-Ning
    Zhang, Xiangliang
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (04) : 3938 - 3951
  • [34] Complete Stacked Denoising Auto-Encoders for Regression
    María-Elena Fernández-García
    José-Luis Sancho-Gómez
    Antonio Ros-Ros
    Aníbal R. Figueiras-Vidal
    Neural Processing Letters, 2021, 53 : 787 - 797
  • [35] Comparison of Auto-encoders with Different Sparsity Regularizers
    Zhang, Li
    Lu, Yaping
    2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2015,
  • [36] Improving Performance on Problems with Few Labelled Data by Reusing Stacked Auto-Encoders
    Amaral, Telmo
    Kandaswamy, Chetak
    Silva, Luis M.
    Alexandre, Luis A.
    de Sa, Joaquim Marques
    Santos, Jorge M.
    2014 13TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), 2014, : 367 - 372
  • [37] Self-Supervised Variational Auto-Encoders
    Gatopoulos, Ioannis
    Tomczak, Jakub M.
    ENTROPY, 2021, 23 (06)
  • [38] Genomic data imputation with variational auto-encoders
    Qiu, Yeping Lina
    Zheng, Hong
    Gevaert, Olivier
    GIGASCIENCE, 2020, 9 (08):
  • [39] InvMap and Witness Simplicial Variational Auto-Encoders
    Medbouhi, Aniss Aiman
    Polianskii, Vladislav
    Varava, Anastasia
    Kragic, Danica
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 2023, 5 (01): : 199 - 236
  • [40] UNDERSTANDING LINEAR STYLE TRANSFER AUTO-ENCODERS
    Pradhan, Ian
    Lyu, Siwei
    2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021,