Spatially heterogeneous learning by a deep student machine

Cited by: 0
Authors
Yoshino, Hajime [1 ,2 ]
Affiliations
[1] Osaka Univ, Cybermedia Ctr, Toyonaka, Osaka 5600043, Japan
[2] Osaka Univ, Grad Sch Sci, Toyonaka, Osaka 5600043, Japan
Source
PHYSICAL REVIEW RESEARCH, 2023, Vol. 5, No. 3
Keywords
NEURAL-NETWORK; TRANSITION; STATES; SPACE
DOI
10.1103/PhysRevResearch.5.033068
CLC Number
O4 [Physics]
Discipline Code
0702
Abstract
Despite spectacular successes, deep neural networks (DNNs) with a huge number of adjustable parameters remain largely black boxes. To shed light on the hidden layers of DNNs, we study supervised learning by a DNN of width N and depth L, consisting of NL perceptrons with c inputs each, by a statistical mechanics approach called the teacher-student setting. We consider an ensemble of student machines that exactly reproduce M sets of N-dimensional input/output relations provided by a teacher machine. We show that the statistical mechanics problem becomes exactly solvable in a high-dimensional limit which we call a "dense limit": N >> c >> 1 and M >> 1 with fixed α = M/c, using the replica method developed by Yoshino [SciPost Phys. Core 2, 005 (2020)]. In conjunction with the theoretical study, we also study the model numerically by performing simple greedy Monte Carlo simulations. The simulations reveal that learning by the DNN is quite heterogeneous in the network space: configurations of the teacher and the student machines are more correlated within the layers closer to the input/output boundaries, while the central region remains much less correlated due to the overparametrization, in qualitative agreement with the theoretical prediction. We evaluate the generalization error of the DNN with various depths L both theoretically and numerically. Remarkably, both the theory and the simulations suggest that the generalization ability of the student machines, which are only weakly correlated with the teacher in the center, does not vanish even in the deep limit L >> 1, where the system becomes heavily overparametrized. We also consider the impact of the effective dimension D (≤ N) of the data by incorporating the hidden manifold model [Goldt, Mézard, Krzakala, and Zdeborová, Phys. Rev. X 10, 041044 (2020)] into our model. The replica theory implies that the loop corrections to the dense limit, which reflect correlations between different nodes in the network, become enhanced by either decreasing the width N or decreasing the effective dimension D of the data. The simulations suggest that both lead to significant improvements in generalization ability.
Pages: 28
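To make the setup in the abstract concrete, the following is a minimal numerical sketch of the teacher-student setting with greedy (zero-temperature) Monte Carlo learning, in the spirit of the simulations described above. All sizes (N, L, c, M), the proposal step width, the fixed connectivity shared between teacher and student, and the helper names (random_weights, forward, train_error) are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical; the paper's dense limit requires N >> c >> 1).
N, L, c, M = 16, 4, 8, 50

# Fixed random connectivity: node i of layer l reads c nodes of layer l-1.
# Sharing it between teacher and student is a simplification of this sketch.
conn = [rng.integers(0, N, size=(N, c)) for _ in range(L)]

def random_weights():
    # One c-dimensional weight vector per perceptron, normalized to |J|^2 = c.
    J = [rng.standard_normal((N, c)) for _ in range(L)]
    return [j * np.sqrt(c) / np.linalg.norm(j, axis=1, keepdims=True) for j in J]

def forward(weights, X):
    # Propagate a batch of +/-1 patterns (shape (M, N)) through L layers
    # of sign perceptrons.
    for J, idx in zip(weights, conn):
        pre = np.einsum('nc,mnc->mn', J, X[:, idx])
        X = np.where(pre >= 0.0, 1.0, -1.0)
    return X

# Teacher machine and the M input/output pairs the student must reproduce.
teacher = random_weights()
X = np.where(rng.standard_normal((M, N)) >= 0.0, 1.0, -1.0)
Y = forward(teacher, X)

def train_error(weights):
    return np.mean(forward(weights, X) != Y)

# Greedy Monte Carlo: perturb one perceptron's weight vector, renormalize,
# and accept the move only if the training error does not increase.
student = random_weights()
err = train_error(student)
for step in range(20000):
    l, i = rng.integers(L), rng.integers(N)
    old = student[l][i].copy()
    trial = old + 0.3 * rng.standard_normal(c)
    student[l][i] = trial * np.sqrt(c) / np.linalg.norm(trial)
    new_err = train_error(student)
    if new_err <= err:
        err = new_err
    else:
        student[l][i] = old
    if err == 0.0:
        break

# Per-layer teacher-student overlap, the kind of quantity behind the paper's
# "spatially heterogeneous" picture (boundary layers correlate more).
overlap = [np.mean(np.sum(t * s, axis=1) / c)
           for t, s in zip(teacher, student)]
print(f"training error: {err:.3f}")
print("layer overlaps:", np.round(overlap, 3))
```

After training one can inspect whether the layers near the input/output boundaries show larger teacher-student overlaps than the central ones, echoing the heterogeneity described in the abstract; the dense-limit regime N >> c >> 1 studied by the replica theory is of course far beyond these toy sizes.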