Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning

Cited by: 0
Authors
Zhang, Jingwei [1 ]
Liu, Tongliang [2 ,3 ,4 ]
Tao, Dacheng [2 ,3 ,4 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Sch Engn, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[2] Univ Sydney, Sydney AI Ctr, Darlington, NSW 2008, Australia
[3] Univ Sydney, Sch Comp Sci, Darlington, NSW 2008, Australia
[4] Univ Sydney, Fac Engn, Darlington, NSW 2008, Australia
Funding
Australian Research Council
Keywords
Deep learning; Training; Stability analysis; Artificial neural networks; Noise measurement; Neural networks; Mutual information; Deep neural networks (DNNs); generalization; information theory; learning theory;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning has transformed computer vision, natural language processing, and speech recognition. However, two critical questions remain open: 1) why do deep neural networks (DNNs) generalize better than shallow networks and 2) does it always hold that a deeper network leads to better performance? In this article, we first show that the expected generalization error of neural networks (NNs) can be upper bounded by the mutual information between the learned features in the last hidden layer and the parameters of the output layer. This bound further implies that, as the number of layers in the network increases, the expected generalization error decreases under mild conditions. Layers with strict information loss, such as convolutional or pooling layers, reduce the generalization error of the whole network; this answers the first question. However, an algorithm with zero expected generalization error does not imply a small test error, because the expected training error becomes large when the information needed to fit the data is lost as the number of layers increases. This suggests that the claim "the deeper the better" is conditioned on a small training error. Finally, we show that deep learning satisfies a weak notion of stability and provide generalization error bounds for noisy stochastic gradient descent (SGD) and for binary classification in DNNs.
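For orientation, the following is a minimal LaTeX sketch of the standard mutual-information generalization bound of Xu and Raginsky (2017), with the feature/parameter pair named in the abstract substituted purely for illustration; the exact constants, conditions, and depth-dependent factors in the paper may differ.

\[
\Bigl| \mathbb{E}\bigl[ R(W) - \hat{R}_S(W) \bigr] \Bigr|
\;\le\;
\sqrt{\frac{2\sigma^{2}}{n}\, I(T_L; W)},
\]
where $R(W)$ is the population risk, $\hat{R}_S(W)$ is the empirical risk on a training sample $S$ of size $n$, the loss is assumed to be $\sigma$-subgaussian, $T_L$ denotes the learned features in the last hidden layer, and $W$ denotes the output-layer parameters.

Read against this form, the abstract's claim that strictly lossy layers (e.g., pooling) improve generalization can be understood as the data-processing inequality shrinking the mutual-information term, while the training error controls the remaining part of the test error.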
Pages: 16683-16695
Page count: 13