Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks

Cited by: 0
Authors
Magen, Roey [1 ]
Shamir, Ohad [1 ]
Affiliations
[1] Weizmann Inst Sci, Rehovot, Israel
Funding
European Research Council
Keywords
DOI
None available
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We provide several new results on the sample complexity of vector-valued linear predictors (parameterized by a matrix), and more generally of neural networks. Focusing on size-independent bounds, where only the Frobenius norm distance of the parameters from some fixed reference matrix W0 is controlled, we show that the sample complexity behavior can be surprisingly different from what one might expect based on the well-studied setting of scalar-valued linear predictors. This also leads to new sample complexity bounds for feed-forward neural networks, tackling some open questions in the literature and establishing a new convex linear prediction problem that is provably learnable without uniform convergence.
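As a minimal illustration of the setting described in the abstract (not code from the paper): a vector-valued linear predictor x ↦ Wx is parameterized by a matrix W, and the size-independent bounds control only the Frobenius norm distance ||W − W0||_F from a fixed reference matrix W0. The dimensions, norm budget, and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 5, 3  # input and output dimensions (illustrative choices)
B = 1.0      # assumed Frobenius-norm budget around the reference matrix

W0 = np.zeros((k, d))             # fixed reference matrix (e.g., the initialization)
W = W0 + rng.normal(size=(k, d))  # candidate parameter matrix

# Project W onto the Frobenius ball {W : ||W - W0||_F <= B},
# the quantity controlled by the size-independent bounds above.
dist = np.linalg.norm(W - W0, "fro")
if dist > B:
    W = W0 + (W - W0) * (B / dist)

x = rng.normal(size=d)
prediction = W @ x  # vector-valued linear prediction in R^k
```

Note that the predictor is matrix-valued (output in R^k) rather than scalar-valued, which is exactly the distinction the abstract highlights.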
Pages: 27
Related Papers
50 items total
  • [41] On the Complexity of Learning Neural Networks
    Song, Le
    Vempala, Santosh
    Wilmes, John
    Xie, Bo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [42] Conceptual complexity of neural networks
    Szymanski, Lech
    McCane, Brendan
    Atkinson, Craig
    NEUROCOMPUTING, 2022, 469 : 52 - 64
  • [43] ABOUT AN ALGORITHM FOR CONSISTENT WEIGHTS INITIALIZATION OF DEEP NEURAL NETWORKS AND NEURAL NETWORKS ENSEMBLE LEARNING
    Drokin, I. S.
    VESTNIK SANKT-PETERBURGSKOGO UNIVERSITETA SERIYA 10 PRIKLADNAYA MATEMATIKA INFORMATIKA PROTSESSY UPRAVLENIYA, 2016, 12 (04) : 66 - 74
  • [44] On the Number of Linear Functions Composing Deep Neural Network: Towards a Refined Definition of Neural Networks Complexity
    Takai, Yuuki
    Sannai, Akiyoshi
    Cordonnier, Matthieu
    24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130
  • [45] Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks
    Shin, Yeonjong
    ANALYSIS AND APPLICATIONS, 2022, 20 (01) : 73 - 119
  • [46] Efficient Quantum State Sample Tomography with Basis-Dependent Neural Networks
    Smith, Alistair W. R.
    Gray, Johnnie
    Kim, M. S.
    PRX QUANTUM, 2021, 2 (02)
  • [47] On the Sample Complexity of the Linear Quadratic Gaussian Regulator
    Al Makdah, Abed AlRahman
    Pasqualetti, Fabio
    2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, 2023, : 602 - 609
  • [48] On the Role of Noise in the Sample Complexity of Learning Recurrent Neural Networks: Exponential Gaps for Long Sequences
    Pour, Alireza F.
    Ashtiani, Hassan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [49] Unlabeled PCA-shuffling initialization for convolutional neural networks
    Ou, Jun
    Li, Yujian
    Shen, Chengkai
    APPLIED INTELLIGENCE, 2018, 48 (12) : 4565 - 4576
  • [50] Power-law initialization algorithm for convolutional neural networks
    Jiang, Kaiwen
    Liu, Jian
    Xing, Tongtong
    Li, Shujing
    Wu, Shunyao
    Shao, Fengjing
    Sun, Rencheng
    Neural Computing and Applications, 2023, 35 : 22431 - 22447