Neural Q-learning for solving PDEs

Cited by: 0
Authors
Cohen, Samuel N. [1 ]
Jiang, Deqing [1 ]
Sirignano, Justin [1 ]
Affiliation
[1] Univ Oxford, Math Inst, Oxford OX2 6GG, England
Funding
Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Deep learning; neural networks; high-dimensional PDEs; high-dimensional learning; Q-learning; BOUNDARY-VALUE-PROBLEMS; DIFFERENTIAL-EQUATIONS; APPROXIMATION; NETWORK; ALGORITHM; OPERATORS;
DOI
Not available
CLC Number
TP [Automation & Computer Technology]
Subject Classification Code
0812
Abstract
Solving high-dimensional partial differential equations (PDEs) is a major challenge in scientific computing. We develop a new numerical method for solving elliptic-type PDEs by adapting the Q-learning algorithm from reinforcement learning. For PDEs with Dirichlet boundary conditions, our "Q-PDE" algorithm is mesh-free and therefore has the potential to overcome the curse of dimensionality. Using a neural tangent kernel (NTK) approach, we prove that the neural network approximator for the PDE solution, trained with the Q-PDE algorithm, converges to the trajectory of an infinite-dimensional ordinary differential equation (ODE) as the number of hidden units → ∞. For monotone PDEs (i.e. those given by monotone operators, which may be nonlinear), despite the lack of a spectral gap in the NTK, we then prove that the limit neural network, which satisfies the infinite-dimensional ODE, converges strongly in L² to the PDE solution as the training time → ∞. More generally, we prove that any fixed point of the wide-network limit of the Q-PDE algorithm is a solution of the PDE (not necessarily under the monotone condition). The numerical performance of the Q-PDE algorithm is studied for several elliptic PDEs.
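The abstract's central practical claim is that the method is mesh-free: the network's parameters are updated from the PDE residual at randomly sampled interior points rather than on a grid. The sketch below illustrates only that generic mesh-free idea, not the paper's Q-PDE update or its NTK analysis. It is a hedged toy example: a linear-in-parameters sine approximator for the 1D Poisson problem −u″(x) = π² sin(πx) on (0, 1) with zero Dirichlet boundary data, trained by stochastic gradient descent on the squared residual at random collocation points; all names, constants, and the basis choice are assumptions made for this illustration.

```python
# Hedged sketch of mesh-free residual training (NOT the paper's Q-PDE
# algorithm): approximate u with u(x) = sum_j theta_j sin(j*pi*x), which
# satisfies u(0) = u(1) = 0 by construction, and fit theta by SGD on the
# PDE residual at randomly sampled interior points.
import numpy as np

rng = np.random.default_rng(0)
K = 3                              # number of sine features (illustrative choice)
k = np.arange(1, K + 1)            # frequencies 1..K
theta = np.zeros(K)                # trainable coefficients

def u(x):
    return np.sin(np.outer(x, k * np.pi)) @ theta

def neg_lap_u(x):
    # -u''(x) for u = sum_j theta_j sin(j*pi*x)
    return np.sin(np.outer(x, k * np.pi)) @ (theta * (k * np.pi) ** 2)

def f(x):
    # forcing term chosen so the exact solution is u(x) = sin(pi*x)
    return np.pi ** 2 * np.sin(np.pi * x)

lr = 2e-4
for step in range(20000):
    x = rng.uniform(0.0, 1.0, size=64)     # random interior points: mesh-free
    residual = neg_lap_u(x) - f(x)         # PDE residual at sampled points
    # gradient of 0.5 * mean(residual**2) with respect to theta
    dres_dtheta = np.sin(np.outer(x, k * np.pi)) * (k * np.pi) ** 2
    grad = dres_dtheta.T @ residual / len(x)
    theta -= lr * grad

xs = np.linspace(0.0, 1.0, 101)
max_err = np.max(np.abs(u(xs) - np.sin(np.pi * xs)))
print(f"max error vs exact solution sin(pi x): {max_err:.2e}")
```

Because the approximator here is linear in its parameters, the loss is convex and SGD recovers the exact coefficient theta = (1, 0, 0); the paper's setting replaces this fixed basis with a wide neural network and a Q-learning-style update, which is where the NTK limit and the monotonicity analysis become necessary.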
Pages: 49
Related Papers (50 total)
  • [1] Neural Q-learning
    ten Hagen, Stephan
    Kröse, Ben
    NEURAL COMPUTING & APPLICATIONS, 2003, 12 (02): 81 - 88
  • [2] Deep Q-learning with hybrid quantum neural network on solving maze problems
    Chen, Hao-Yuan
    Chang, Yen-Jui
    Liao, Shih-Wei
    Chang, Ching-Ray
    QUANTUM MACHINE INTELLIGENCE, 2024, 6 (01)
  • [3] Solving Twisty Puzzles Using Parallel Q-learning
    Hukmani, Kavish
    Kolekar, Sucheta
    Vobugari, Sreekumar
    ENGINEERING LETTERS, 2021, 29 (04): 1535 - 1543
  • [4] Neural Q-Learning Controller for Mobile Robot
    Ganapathy, Velappa
    Yun, Soh Chin
    Joe, Halim Kusama
    2009 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS, VOLS 1-3, 2009: 863 - 868
  • [5] Mobile Robot Navigation: Neural Q-Learning
    Yun, Soh Chin
    Parasuraman, S.
    Ganapathy, V.
    ADVANCES IN COMPUTING AND INFORMATION TECHNOLOGY, VOL 3, 2013, 178: 259 - +
  • [6] Mobile robot navigation: neural Q-learning
    Parasuraman, S.
    Yun, Soh Chin
    INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS IN TECHNOLOGY, 2012, 44 (04): 303 - 311
  • [7] A Novel Heuristic Q-Learning Algorithm for Solving Stochastic Games
    Li, Jianwei
    Liu, Weiyi
    2008 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-8, 2008: 1135 - 1144
  • [8] Q-learning
    Watkins, C. J. C. H.
    Dayan, P.
    MACHINE LEARNING, 1992, 8 (3-4): 279 - 292
  • [9] Neural Q-learning in Motion Planning for Mobile Robot
    Qin, Zheng
    Gu, Jason
    2009 IEEE INTERNATIONAL CONFERENCE ON AUTOMATION AND LOGISTICS (ICAL 2009), VOLS 1-3, 2009: 1024 - 1028