Provable Bounds for Learning Some Deep Representations

Times Cited: 0
Authors
Arora, Sanjeev [1 ,2 ]
Bhaskara, Aditya [3 ]
Ge, Rong [4 ]
Ma, Tengyu [1 ,2 ]
Institutions
[1] Princeton Univ, Comp Sci Dept, Princeton, NJ 08540 USA
[2] Princeton Univ, Ctr Computat Intractabil, Princeton, NJ 08540 USA
[3] Google Res, New York, NY 10011 USA
[4] Microsoft Res, Cambridge, MA 02142 USA
Keywords
DOI
Not available
Chinese Library Classification (CLC) Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
We give algorithms with provable guarantees that learn a class of deep nets in the generative model view popularized by Hinton and others. Our generative model is an n-node multilayer network that has degree at most n^γ for some γ < 1, and each edge has a random edge weight in [-1, 1]. Our algorithm learns almost all networks in this class with polynomial running time. The sample complexity is quadratic or cubic depending upon the details of the model. The algorithm uses layerwise learning. It is based upon a novel idea of observing correlations among features and using these to infer the underlying edge structure via a global graph recovery procedure. The analysis of the algorithm reveals interesting structure of neural nets with random edge weights.
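The correlation-based recovery idea can be illustrated with a small simulation. The sketch below is not the authors' algorithm; it only shows, for an assumed toy instance (a single hidden layer, with illustrative choices of layer size, out-degree, sparsity, and activation threshold), how pairwise correlations among observed features flag pairs that are likely to share a hidden parent, which is the starting point of a graph-recovery step.

# Toy sketch (illustrative assumptions only, not the paper's actual procedure).
import numpy as np

rng = np.random.default_rng(0)

n_hidden, n_obs = 200, 200   # one hidden layer and one observed layer
d = 8                        # out-degree of each hidden unit (stands in for n^γ, γ < 1)
rho = 0.02                   # fraction of hidden units active in a sample
n_samples = 20000

# Random sparse bipartite graph with edge weights drawn uniformly from [-1, 1].
W = np.zeros((n_hidden, n_obs))
for i in range(n_hidden):
    targets = rng.choice(n_obs, size=d, replace=False)
    W[i, targets] = rng.uniform(-1.0, 1.0, size=d)

# Generate samples: sparse random hidden activations, thresholded observed layer.
H = (rng.random((n_samples, n_hidden)) < rho).astype(float)
X = (H @ W > 0.5).astype(float)

# Observed units that frequently fire together tend to share a hidden parent.
C = np.nan_to_num(np.corrcoef(X, rowvar=False))
np.fill_diagonal(C, 0.0)
candidates = np.argwhere(C > 0.1)

# Check the heuristic against the true graph: how many flagged pairs share a parent?
A = (W != 0).astype(int)
shares_parent = (A.T @ A) > 0
precision = shares_parent[candidates[:, 0], candidates[:, 1]].mean()
print(f"{len(candidates)} candidate pairs, {precision:.0%} truly share a hidden parent")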
Pages: 9
Related Papers
50 records in total
  • [31] Learning deep representations of enzyme thermal adaptation
    Li, Gang
    Buric, Filip
    Zrimec, Jan
    Viknander, Sandra
    Nielsen, Jens
    Zelezniak, Aleksej
    Engqvist, Martin K. M.
    PROTEIN SCIENCE, 2022, 31 (12)
  • [32] Invariant representations in deep learning for optoacoustic imaging
    Vera, M.
    Gonzalez, M. G.
    Vega, L. Rey
    REVIEW OF SCIENTIFIC INSTRUMENTS, 2023, 94 (05)
  • [33] Learning Deep Representations with Probabilistic Knowledge Transfer
    Passalis, Nikolaos
    Tefas, Anastasios
    COMPUTER VISION - ECCV 2018, PT XI, 2018, 11215 : 283 - 299
  • [34] A novel heuristic and provable bounds for reconfigurable architecture design
    Smith, Alastair M.
    Constantinides, George A.
    Cheung, Peter Y. K.
    FCCM 2006: 14TH ANNUAL IEEE SYMPOSIUM ON FIELD-PROGRAMMABLE CUSTOM COMPUTING MACHINES, PROCEEDINGS, 2006, : 275 - +
  • [35] On the Symmetries of Deep Learning Models and their Internal Representations
    Godfrey, Charles
    Brown, Davis
    Emerson, Tegan
    Kvinge, Henry
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [36] PathologyGAN: Learning deep representations of cancer tissue
    Quiros, Adalberto Claudio
    Murray-Smith, Roderick
    Yuan, Ke
    MEDICAL IMAGING WITH DEEP LEARNING, VOL 121, 2020, 121 : 669 - 695
  • [37] Learning Extremal Representations with Deep Archetypal Analysis
    Keller, Sebastian Mathias
    Samarin, Maxim
    Torres, Fabricio Arend
    Wieser, Mario
    Roth, Volker
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2021, 129 (04) : 805 - 820
  • [38] Learning Extremal Representations with Deep Archetypal Analysis
    Sebastian Mathias Keller
    Maxim Samarin
    Fabricio Arend Torres
    Mario Wieser
    Volker Roth
    International Journal of Computer Vision, 2021, 129 : 805 - 820
  • [39] SKELETAL POINT REPRESENTATIONS WITH GEOMETRIC DEEP LEARNING
    Khargonkar, Ninad
    Paniagua, Beatriz
    Vicory, Jared
    2023 IEEE 20TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI, 2023,
  • [40] Information Theoretical Analysis of Deep Learning Representations
    Furusho, Yasutaka
    Kubo, Takatomi
    Ikeda, Kazushi
    NEURAL INFORMATION PROCESSING, PT I, 2015, 9489 : 599 - 605