Solving inverse problems via auto-encoders

Cited by: 6
Authors
Peng P. [1 ]
Jalali S. [2 ]
Yuan X. [2 ]
Affiliations
[1] Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ 08854
[2] Nokia Bell Labs, Murray Hill, NJ 07974
Source
IEEE Journal on Selected Areas in Information Theory, Institute of Electrical and Electronics Engineers Inc., 2020, Vol. 1, Issue 1 | Corresponding author: Jalali, Shirin (shirin.jalali@nokia-bell-labs.com)
Keywords
Auto-encoders; Compressed sensing; Deep learning; Generative models; Inverse problems;
DOI
10.1109/JSAIT.2020.2983643
Abstract
Compressed sensing (CS) is about recovering a structured signal from its under-determined linear measurements. Starting from sparsity, recovery methods have steadily moved towards more complex structures. Emerging machine learning tools, such as generative functions based on neural networks, are able to learn general complex structures from training data, which makes them potentially powerful tools for designing CS algorithms. Consider a desired class of signals Q, Q ⊂ R^n, and a corresponding generative function g : U^k → R^n, U ⊂ R, such that sup_{x∈Q} min_{u∈U^k} (1/√n) ‖g(u) − x‖_2 ≤ δ. A recovery method based on g seeks g(u) with minimum measurement error. In this paper, the performance of such a recovery method is studied, under both noisy and noiseless measurements. In the noiseless case, roughly speaking, it is proven that, as k and n grow without bound and δ converges to zero, if the number of measurements (m) is larger than the input dimension of the generative model (k), then asymptotically, almost lossless recovery is possible. Furthermore, the performance of an efficient iterative algorithm based on projected gradient descent is studied. In this case, an auto-encoder is used to define and enforce the source structure at the projection step. The auto-encoder is defined by encoder and decoder (generative) functions f : R^n → U^k and g : U^k → R^n, respectively. We theoretically prove that, roughly, given m > 40k log(1/δ) measurements, such an algorithm converges to the vicinity of the desired result, even in the presence of additive white Gaussian noise. Numerical results exploring the effectiveness of the proposed method are presented. © 2020 IEEE.
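The first recovery method in the abstract searches the latent space of the generative function g for the code u whose image g(u) best explains the measurements. Below is a minimal sketch of that idea, not the authors' code: the sensing matrix A, the decoder g, and the helper name recover_latent are all stand-ins, and any trained generative network could play the role of g.

```python
# Sketch of generative-model-based recovery: given y = A x + noise and a
# trained decoder g : R^k -> R^n, minimize the measurement error over u.
import torch

def recover_latent(A, y, g, k, steps=2000, lr=1e-2):
    """Hypothetical helper: gradient descent over the latent code u."""
    u = torch.zeros(k, requires_grad=True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.sum((A @ g(u) - y) ** 2)  # measurement error ||A g(u) - y||^2
        loss.backward()
        opt.step()
    return g(u).detach()  # reconstruction x_hat = g(u*)
```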
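The iterative algorithm in the abstract alternates a gradient step on the measurement error with a projection implemented by the auto-encoder: encode with f, then decode with g. The loop below is a schematic sketch under that reading, not the authors' implementation; the step size eta, the iteration count, and the function name pgd_autoencoder are assumptions, and f and g stand for a trained encoder/decoder pair.

```python
# Sketch of projected gradient descent with an auto-encoder projection step.
import numpy as np

def pgd_autoencoder(A, y, f, g, x0, eta=1.0, iters=100):
    x = x0
    for _ in range(iters):
        s = x + eta * A.T @ (y - A @ x)  # gradient step on 0.5 * ||A x - y||^2
        x = g(f(s))                      # project onto the learned signal set
    return x
```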
Pages: 312 - 323
Number of pages: 11
Related Papers (50 in total)
  • [21] Unsupervised Hyperbolic Representation Learning via Message Passing Auto-Encoders
    Park, Jiwoong
    Cho, Junho
    Chang, Hyung Jin
    Choi, Jin Young
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 5512 - 5522
  • [22] Anomaly detection of spectrum in wireless communication via deep auto-encoders
    Qingsong Feng
    Yabin Zhang
    Chao Li
    Zheng Dou
    Jin Wang
    The Journal of Supercomputing, 2017, 73 : 3161 - 3178
  • [23] Anomaly detection of spectrum in wireless communication via deep auto-encoders
    Feng, Qingsong
    Zhang, Yabin
    Li, Chao
    Dou, Zheng
    Wang, Jin
JOURNAL OF SUPERCOMPUTING, 2017, 73 (07): 3161 - 3178
  • [24] Unsupervised Extraction of Video Highlights Via Robust Recurrent Auto-encoders
    Yang, Huan
    Wang, Baoyuan
    Lin, Stephen
    Wipf, David
    Guo, Minyi
    Guo, Baining
    2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 4633 - 4641
  • [25] AutoFM: an efficient factorization machine model via probabilistic auto-encoders
    Tianlin Huang
    Lvqing Bi
    Ning Wang
    Defu Zhang
    Neural Computing and Applications, 2021, 33 : 9451 - 9466
  • [26] AutoFM: an efficient factorization machine model via probabilistic auto-encoders
    Huang, Tianlin
    Bi, Lvqing
    Wang, Ning
    Zhang, Defu
NEURAL COMPUTING & APPLICATIONS, 2021, 33 (15): 9451 - 9466
  • [27] Clustering Noisy Trajectories via Robust Deep Attention Auto-encoders
    Zhang, Rui
    Xie, Peng
    Jiang, Hongbo
    Wang, Chen
    Xiao, Zhu
    Liu, Ling
    2019 20TH INTERNATIONAL CONFERENCE ON MOBILE DATA MANAGEMENT (MDM 2019), 2019, : 63 - 71
  • [28] Radon-Sobolev Variational Auto-Encoders
    Turinici, Gabriel
    NEURAL NETWORKS, 2021, 141 : 294 - 305
  • [29] HSAE: A Hessian regularized sparse auto-encoders
    Liu, Weifeng
    Ma, Tengzhou
    Tao, Dapeng
    You, Jane
    NEUROCOMPUTING, 2016, 187 : 59 - 65
  • [30] Feature Selection using Multiple Auto-Encoders
    Guo, Xinyu
    Minai, Ali A.
    Lu, Long J.
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 4602 - 4609