Applying statistical learning theory to deep learning

Cited: 0
Authors
Gerbelot, Cedric [1 ]
Karagulyan, Avetik [2 ]
Karp, Stefani [3 ,4 ]
Ravichandran, Kavya [5 ]
Stern, Menachem [6 ]
Srebro, Nathan [5 ]
Affiliations
[1] Courant Inst Math Sci, New York, NY 10012 USA
[2] King Abdullah Univ Sci & Technol, Thuwal 23955, Saudi Arabia
[3] Carnegie Mellon Univ, Pittsburgh, PA USA
[4] Google Res, New York, NY USA
[5] Toyota Technol Inst, Chicago, IL 60637 USA
[6] Univ Penn, Dept Phys & Astron, Philadelphia, PA USA
Keywords
machine learning; learning theory; deep learning; analysis of algorithms; first-order methods; bounds
DOI
10.1088/1742-5468/ad3a5f
CLC number
O3 [Mechanics]
Subject classification codes
08; 0801
Abstract
Although statistical learning theory provides a robust framework for understanding supervised learning, many theoretical aspects of deep learning remain unclear; in particular, how different architectures may lead to inductive bias when trained using gradient-based methods. The goal of these lectures is to provide an overview of some of the main questions that arise when attempting to understand deep learning from a learning-theory perspective. After a brief review of statistical learning theory and stochastic optimization, we discuss implicit bias in the context of benign overfitting. We then give a general description of the mirror descent algorithm, showing how one may move back and forth between a parameter space and the corresponding function space for a given learning problem, and how the geometry of the learning problem may be represented by a metric tensor. Building on this framework, we provide a detailed study of the implicit bias of gradient descent on linear diagonal networks for various regression tasks, showing how the loss function, the scale of the parameters at initialization, and the depth of the network may lead to various forms of implicit bias, in particular a transition between kernel and feature-learning regimes.
Pages: 64
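
To make the mirror descent discussion in the abstract concrete, here is a minimal NumPy sketch of entropic mirror descent (exponentiated gradient) next to plain gradient descent on an underdetermined least-squares problem. The setup below (X, y, the negative-entropy potential, the step size eta, the sparse planted solution) is an illustrative assumption, not taken from the paper; it only shows how the choice of potential, whose Hessian plays the role of the metric tensor, changes which interpolating solution the iterates converge to.

    import numpy as np

    # Illustrative setup (not from the paper): underdetermined least squares,
    # so many interpolating solutions exist and the algorithm's geometry
    # decides which one is reached.
    rng = np.random.default_rng(0)
    n, d = 20, 50
    X = rng.normal(size=(n, d))
    w_star = np.zeros(d)
    w_star[:3] = 1.0                      # sparse, nonnegative planted solution
    y = X @ w_star

    def grad(w):                          # gradient of 0.5 * ||Xw - y||^2
        return X.T @ (X @ w - y)

    # Mirror descent with the negative-entropy potential phi(w) = sum_i w_i log w_i
    # updates in the dual space: grad phi(w_next) = grad phi(w) - eta * grad f(w),
    # which on the positive orthant is the multiplicative update below.
    # Gradient descent is the special case phi(w) = 0.5 * ||w||^2.
    eta = 1e-3
    w_md = np.full(d, 1e-3)               # small positive initialization
    w_gd = np.full(d, 1e-3)
    for _ in range(20_000):
        w_md = w_md * np.exp(-eta * grad(w_md))
        w_gd = w_gd - eta * grad(w_gd)

    # Both nearly interpolate, but with different implicit bias: the entropic
    # geometry concentrates mass on few coordinates (sparse-like solutions),
    # while Euclidean gradient descent stays close to the minimum-l2-norm
    # interpolant.
    print("residual MD:", np.linalg.norm(X @ w_md - y))
    print("residual GD:", np.linalg.norm(X @ w_gd - y))
    print("MD largest coords:", np.round(np.sort(w_md)[-5:], 3))
    print("GD largest coords:", np.round(np.sort(w_gd)[-5:], 3))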
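
The abstract's claim that linear diagonal networks transition between kernel and feature-learning regimes can likewise be illustrated with a small sketch. Below, plain gradient descent is run on the depth-2 reparameterization w = u*u - v*v; everything here (the data, the step size, the initialization scales alpha) is an assumed toy setting, not the paper's experiments. The qualitative effect to look for is that a large initialization scale yields a dense, small-l2 interpolant (kernel-like regime), while a small scale yields a sparse one (rich, feature-learning regime).

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 20, 50
    X = rng.normal(size=(n, d))
    w_star = np.zeros(d)
    w_star[:3] = 1.0                      # sparse planted solution
    y = X @ w_star

    def train(alpha, eta=5e-4, steps=100_000):
        """GD on (u, v) for the depth-2 diagonal network w = u*u - v*v."""
        u = np.full(d, alpha)             # alpha is the initialization scale
        v = np.full(d, alpha)
        for _ in range(steps):
            g = X.T @ (X @ (u * u - v * v) - y)   # gradient w.r.t. effective w
            # chain rule: dw/du = 2u, dw/dv = -2v (simultaneous update)
            u, v = u - eta * 2 * u * g, v + eta * 2 * v * g
        return u * u - v * v

    for alpha in (1.0, 1e-3):             # large vs small initialization scale
        w = train(alpha)
        print(f"alpha={alpha:g}: ||w||_1 = {np.abs(w).sum():.2f}, "
              f"largest |w_i|: {np.round(np.sort(np.abs(w))[-5:], 2)}")

Replacing the parameterization with a deeper one (e.g. w = u**D - v**D) or changing the loss probes the abstract's remaining claims about how depth and the loss function shape the implicit bias; the lecture notes analyze these effects precisely, whereas this sketch only exhibits the initialization-scale transition.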