Over-parameterized Deep Nonparametric Regression for Dependent Data with Its Applications to Reinforcement Learning

Cited by: 0
Authors
Feng, Xingdong [1 ]
Jiao, Yuling [2 ]
Kang, Lican [3 ]
Zhang, Baqun [1 ]
Zhou, Fan [1 ]
Affiliations
[1] Shanghai Univ Finance & Econ, Sch Stat & Management, Shanghai, Peoples R China
[2] Wuhan Univ, Hubei Key Lab Computat Sci, Sch Math & Stat, Wuhan, Peoples R China
[3] Wuhan Univ, Sch Math & Stat, Wuhan, Peoples R China
Funding
National Natural Science Foundation of China; Shanghai Rising-Star Program
Keywords
Deep reinforcement learning; Low-dimensional Riemannian manifold; Penalized regression; beta-mixing; Neural networks; Generalization error; Policy iteration; Approximation; Bounds; Convergence; Systems; Rates; Game
DOI: not available
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
In this paper, we provide statistical guarantees for over-parameterized deep nonparametric regression in the presence of dependent data. By decomposing the error, we establish non-asymptotic error bounds for the deep estimator, achieved by effectively balancing the approximation and generalization errors. We derive an approximation result for Hölder functions under constrained weights, and we bound the generalization error in terms of the weight norm, which allows the number of neural network parameters to greatly exceed the training sample size. Furthermore, we address the curse of dimensionality by assuming that the samples originate from distributions with low intrinsic dimension; under this assumption, the challenges posed by high-dimensional spaces can be overcome. By incorporating an additional error propagation mechanism, we derive oracle inequalities for over-parameterized deep fitted Q-iteration.
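To make the reinforcement-learning setting concrete, the sketch below shows plain fitted Q-iteration (FQI): at each iteration the Bellman targets r + γ·max_a' Q(s', a') are regressed onto (state, action) pairs. This is only a minimal illustration of the generic algorithm the abstract refers to; the toy one-dimensional MDP, the polynomial feature map (a crude stand-in for a deep network), and the ridge penalty are all assumptions of this example, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of n transitions (s, a, r, s') from a 1-D MDP with 2 actions.
n = 500
S = rng.uniform(-1, 1, size=n)
A = rng.integers(0, 2, size=n)                 # action 0 moves left, 1 moves right
R = -np.abs(S) + 0.1 * (A == (S < 0))          # reward favors moving toward 0
S2 = np.clip(S + (2 * A - 1) * 0.1, -1, 1)     # next state

gamma = 0.9

def features(s, a):
    """Per-action polynomial features (an illustrative stand-in for a deep net)."""
    phi = np.stack([np.ones_like(s), s, s**2, s**3], axis=1)
    out = np.zeros((len(s), 8))
    out[a == 0, :4] = phi[a == 0]
    out[a == 1, 4:] = phi[a == 1]
    return out

def q_values(w, s):
    """Q(s, a) for both actions under weight vector w."""
    both = [features(s, np.full(len(s), a)) @ w for a in (0, 1)]
    return np.stack(both, axis=1)

w = np.zeros(8)
for _ in range(50):
    # Bellman target: r + gamma * max_a' Q(s', a')
    y = R + gamma * q_values(w, S2).max(axis=1)
    X = features(S, A)
    # Regression step; the ridge penalty echoes the paper's use of
    # norm-controlled (penalized) estimators.
    w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(8), X.T @ y)

print(q_values(w, np.array([0.0])))  # learned Q at the center state
```

The paper's analysis replaces the fixed feature map with an over-parameterized deep network and tracks how the per-iteration regression error propagates through the repeated Bellman updates, which is what the oracle inequalities quantify.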
Pages: 40