Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning

Times Cited: 0
Authors
Zhang, Jingwei [1 ]
Liu, Tongliang [2 ,3 ,4 ]
Tao, Dacheng [2 ,3 ,4 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Sch Engn, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[2] Univ Sydney, Sydney AI Ctr, Darlington, NSW 2008, Australia
[3] Univ Sydney, Sch Comp Sci, Darlington, NSW 2008, Australia
[4] Univ Sydney, Fac Engn, Darlington, NSW 2008, Australia
Funding
Australian Research Council;
Keywords
Deep learning; Training; Stability analysis; Artificial neural networks; Noise measurement; Neural networks; Mutual information; Deep neural networks (DNNs); generalization; information theory; learning theory;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning has transformed computer vision, natural language processing, and speech recognition. However, two critical questions remain open: 1) why do deep neural networks (DNNs) generalize better than shallow networks and 2) does a deeper network always lead to better performance? In this article, we first show that the expected generalization error of neural networks (NNs) can be upper bounded by the mutual information between the learned features in the last hidden layer and the parameters of the output layer. This bound further implies that, as the number of layers in the network increases, the expected generalization error decreases under mild conditions. Layers with strict information loss, such as convolutional or pooling layers, reduce the generalization error of the whole network; this answers the first question. However, an algorithm with zero expected generalization error does not imply a small test error, because the expected training error grows when the information needed to fit the data is lost as the number of layers increases. This suggests that the claim "the deeper the better" is conditioned on a small training error. Finally, we show that deep learning satisfies a weak notion of stability, and we provide generalization error bounds for noisy stochastic gradient descent (SGD) and binary classification in DNNs.
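A minimal sketch of the shape such a bound typically takes, assuming a sigma-sub-Gaussian loss; the symbols T_L (last-hidden-layer features), W (output-layer parameters), and n (training-set size) are illustrative stand-ins, and the exact constants and conditions are those stated in the paper, not this sketch:

    % Illustrative mutual-information generalization bound.
    % Assumption: the loss is \sigma-sub-Gaussian; constants are
    % schematic and not the paper's exact statement.
    %   T_L : learned features in the last hidden layer
    %   W   : parameters of the output layer
    %   n   : number of training samples
    \[
      \bigl|\,\mathbb{E}[\operatorname{gen}(T_L, W)]\,\bigr|
      \;\le\;
      \sqrt{\frac{2\sigma^{2}}{n}\, I(T_L; W)} .
    \]
    % Intuition for the depth claim: the layer outputs form a Markov
    % chain X \to T_1 \to \cdots \to T_L, so by the data-processing
    % inequality I(X; T_L) \le I(X; T_l) for every l < L. Each strictly
    % information-losing layer (e.g., convolution or pooling) can only
    % reduce the information retained, which is the mechanism behind
    % the "deeper generalizes better" direction of the abstract.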
Pages: 16683-16695
Page count: 13
Related Papers
50 records in total
  • [1] Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning
    Zhang, Jingwei
    Liu, Tongliang
    Tao, Dacheng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 16683 - 16695
  • [2] An Information-Theoretic Framework for Deep Learning
    Jeon, Hong Jun
    Van Roy, Benjamin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [3] INFORMATION-THEORETIC VIEW OF CONTROL
    Roy, Prateep
    Cela, Arben
    Hamam, Yskandar
    ICINCO 2009: PROCEEDINGS OF THE 6TH INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, VOL 3, 2009, : 5 - +
  • [4] An information-theoretic view on spacetime
    Saueressig, Frank
    Khosravi, Amir
    MODERN PHYSICS LETTERS A, 2021, 36 (10)
  • [5] Generalizing Movements with Information-Theoretic Stochastic Optimal Control
    Lioutikov, Rudolf
    Paraschos, Alexandros
    Peters, Jan
    Neumann, Gerhard
    JOURNAL OF AEROSPACE INFORMATION SYSTEMS, 2014, 11 (09): : 579 - 595
  • [6] An Information-Theoretic View of Array Processing
    Dmochowski, Jacek
    Benesty, Jacob
    Affes, Sofiene
    IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2009, 17 (02): : 392 - 401
  • [7] An information-theoretic view of network management
    Ho, T
    Médard, M
    Koetter, R
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2005, 51 (04) : 1295 - 1312
  • [8] An Information-Theoretic View of Cloud Workloads
    Varshney, Lav R.
    Ratakonda, Krishna C.
    2014 IEEE INTERNATIONAL CONFERENCE ON CLOUD ENGINEERING (IC2E), 2014, : 466 - 471
  • [9] An Information-Theoretic View of Stochastic Localization
    El Alaoui, Ahmed
    Montanari, Andrea
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2022, 68 (11) : 7423 - 7426
  • [10] An information-theoretic view of visual analytics
    Chen, Chaomei
    IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2008, 28 (01) : 18 - 23