Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning

Cited by: 0
Authors
Zhang, Jingwei [1 ]
Liu, Tongliang [2 ,3 ,4 ]
Tao, Dacheng [2 ,3 ,4 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Sch Engn, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[2] Univ Sydney, Sydney AI Ctr, Darlington, NSW 2008, Australia
[3] Univ Sydney, Sch Comp Sci, Darlington, NSW 2008, Australia
[4] Univ Sydney, Fac Engn, Darlington, NSW 2008, Australia
Funding
Australian Research Council;
Keywords
Deep learning; Training; Stability analysis; Artificial neural networks; Noise measurement; Neural networks; Mutual information; Deep neural networks (DNNs); generalization; information theory; learning theory;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Deep learning has transformed computer vision, natural language processing, and speech recognition. However, two critical questions remain open: 1) why do deep neural networks (DNNs) generalize better than shallow networks and 2) does a deeper network always lead to better performance? In this article, we first show that the expected generalization error of neural networks (NNs) can be upper bounded by the mutual information between the learned features in the last hidden layer and the parameters of the output layer. This bound further implies that, under mild conditions, the expected generalization error decreases as the number of layers increases. Layers with strict information loss, such as convolutional or pooling layers, reduce the generalization error of the whole network; this answers the first question. However, a zero expected generalization error does not imply a small test error, because the expected training error becomes large when the information needed to fit the data is lost as the number of layers increases. This suggests that the claim "the deeper the better" is conditioned on a small training error. Finally, we show that deep learning satisfies a weak notion of stability, and we provide generalization error bounds for noisy stochastic gradient descent (SGD) and binary classification in DNNs.
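For orientation, the flavor of the depth-dependent bound described in the abstract can be written down in the notation of the standard information-theoretic generalization framework (Xu and Raginsky, 2017) that this line of work builds on. The display below is a hedged reconstruction, not a quotation from the paper: the sub-Gaussian proxy sigma^2, the sample size n, and the per-layer contraction factor eta are illustrative assumptions.

    % A minimal sketch, assuming an n-sample training set S, learned
    % weights W, a sigma-sub-Gaussian loss, and L strictly
    % information-losing layers (e.g., pooling), each assumed to contract
    % mutual information by a factor eta < 1 via the strong data
    % processing inequality.
    \[
      \bigl|\mathbb{E}[\,R(W) - R_S(W)\,]\bigr|
      \;\le\;
      \eta^{L/2}\,\sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)},
      \qquad 0 < \eta < 1.
    \]

Read this way, the abstract's two claims fit together: the factor eta^{L/2} shrinks the generalization gap as the depth L grows, while the same information loss can inflate the empirical risk R_S(W), so "the deeper the better" holds only while the training error stays small.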
Pages: 16683 - 16695
Page count: 13
Related Papers
50 records in total
  • [21] Information-theoretic approach to interactive learning
    Still, S.
    EPL, 2009, 85 (02)
  • [22] Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning
    Furuta, Hiroki
    Matsushima, Tatsuya
    Kozuno, Tadashi
    Matsuo, Yutaka
    Levine, Sergey
    Nachum, Ofir
    Gu, Shixiang Shane
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [23] A unified view of information-theoretic aspects of cognitive radio
    Awan, F. G.
    Hanif, M. F.
    PROCEEDINGS OF THE FIFTH INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY: NEW GENERATIONS, 2008, : 327+
  • [24] Cooperative wireless cellular systems: An information-theoretic view
    New Jersey Institute of Technology, Newark, United States
    Found. Trends Commun. Inf. Theory, 1-2: 1 - 177
  • [25] An Information-Theoretic View of Generalization via Wasserstein Distance
    Wang, Hao
    Diaz, Mario
    Santos Filho, Jose Candido S.
    Calmon, Flavio P.
    2019 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2019, : 577 - 581
  • [26] Unifying cost and information in information-theoretic competitive learning
    Kamimura, R
    NEURAL NETWORKS, 2005, 18 (5-6) : 711 - 718
  • [27] Generalized Information-theoretic Multi-view Clustering
    Huang, Weitian
    Yang, Sirui
    Cai, Hongmin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [28] Cooperative Wireless Cellular Systems: An Information-Theoretic View
    Simeone, Osvaldo
    Levy, Nathan
    Sanderovich, Amichai
    Somekh, Oren
    Zaidel, Benjamin M.
    Poor, H. Vincent
    Shamai, Shlomo
    FOUNDATIONS AND TRENDS IN COMMUNICATIONS AND INFORMATION THEORY, 2011, 8 (1-2): 1 - 185
  • [29] An information-theoretic view of connectivity in wireless sensor networks
    Liu, X
    Srikant, R
    2004 FIRST ANNUAL IEEE COMMUNICATIONS SOCIETY CONFERENCE ON SENSOR AND AD HOC COMMUNICATIONS AND NETWORKS, 2004, : 508 - 516
  • [30] Forced information and information loss in information-theoretic competitive learning
    Kamimura, Ryotaro
    PROCEEDINGS OF THE IASTED INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND APPLICATIONS, 2007, : 69 - 74