Spurious Local Minima are Common in Two-Layer ReLU Neural Networks

Cited by: 0
Authors:
Safran, Itay [1 ]
Shamir, Ohad [1 ]
Affiliations:
[1] Weizmann Inst Sci, Rehovot, Israel
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
We consider the optimization problem associated with training simple ReLU neural networks of the form $\mathbf{x} \mapsto \sum_{i=1}^{k} \max\{0, \mathbf{w}_i^\top \mathbf{x}\}$ with respect to the squared loss. We provide a computer-assisted proof that even if the input distribution is standard Gaussian, even if the dimension is arbitrarily large, and even if the target values are generated by such a network, with orthonormal parameter vectors, the problem can still have spurious local minima once $6 \le k \le 20$. By a concentration-of-measure argument, this implies that in high input dimensions, nearly all target networks of the relevant sizes lead to spurious local minima. Moreover, we conduct experiments showing that the probability of hitting such local minima is quite high, and increases with the network size. On the positive side, mild over-parameterization appears to drastically reduce such local minima, indicating that an over-parameterization assumption is necessary to obtain a positive result in this setting.
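As a concrete illustration of the objective described in the abstract, the following is a minimal Monte Carlo sketch in NumPy (not from the paper; the function names, the choice of target vectors, and the sample size are illustrative assumptions). It estimates the population squared loss of a width-k ReLU network against a target network with orthonormal parameter vectors under standard Gaussian inputs.

```python
import numpy as np

# Illustrative sketch (not the paper's code): the network x -> sum_i max(0, w_i . x),
# trained with squared loss against a target network whose parameter vectors are
# orthonormal, with inputs x ~ N(0, I_d).

def relu_net(W, X):
    # W: (k, d) weight vectors, X: (n, d) inputs; returns the (n,) network outputs.
    return np.maximum(X @ W.T, 0.0).sum(axis=1)

def population_loss_estimate(W, V, n_samples=100_000, seed=None):
    # Monte Carlo estimate of (1/2) * E_x[(f_W(x) - f_V(x))^2] under x ~ N(0, I_d).
    rng = np.random.default_rng(seed)
    d = W.shape[1]
    X = rng.standard_normal((n_samples, d))
    diff = relu_net(W, X) - relu_net(V, X)
    return 0.5 * np.mean(diff ** 2)

k, d = 6, 6                       # k = 6 is the smallest width covered by the result
V = np.eye(d)[:k]                 # orthonormal target vectors (here: standard basis)
W0 = np.random.default_rng(0).standard_normal((k, d)) / np.sqrt(d)  # random init
print(population_loss_estimate(W0, V, seed=1))   # positive loss at a random point
print(population_loss_estimate(V, V, seed=1))    # ~0 at the global minimum W = V
```

Running gradient-based optimization on such a Monte Carlo objective from random initializations is one way to probe empirically how often training converges to points with non-negligible loss, in the spirit of the experiments the abstract mentions.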
Pages: 9