Is L2 Physics-Informed Loss Always Suitable for Training Physics-Informed Neural Network?

Cited by: 0
Authors
Wang, Chuwei [1 ]
Li, Shanda [2 ,5 ]
He, Di [3 ]
Wang, Liwei [3 ,4 ]
Affiliations
[1] Peking Univ, Sch Math Sci, Beijing, Peoples R China
[2] Carnegie Mellon Univ, Sch Comp Sci, Machine Learning Dept, Pittsburgh, PA 15213 USA
[3] Peking Univ, Sch Intelligence Sci & Technol, Natl Key Lab Gen Artificial Intelligence, Beijing, Peoples R China
[4] Peking Univ, Ctr Data Sci, Beijing, Peoples R China
[5] Zhejiang Lab, Hangzhou, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
The Physics-Informed Neural Network (PINN) approach is a new and promising way to solve partial differential equations using deep learning. The L-2 Physics-Informed Loss is the de facto standard for training Physics-Informed Neural Networks. In this paper, we challenge this common practice by investigating the relationship between the loss function and the approximation quality of the learned solution. In particular, we leverage the concept of stability from the partial differential equation literature to study the asymptotic behavior of the learned solution as the loss approaches zero. With this concept, we study an important class of high-dimensional non-linear PDEs in optimal control, the Hamilton-Jacobi-Bellman (HJB) equation, and prove that for the general L-p Physics-Informed Loss, a wide class of HJB equations is stable only if p is sufficiently large. Therefore, the commonly used L-2 loss is not suitable for training PINNs on those equations, while the L-infinity loss is a better choice. Based on this theoretical insight, we develop a novel PINN training algorithm that minimizes the L-infinity loss for HJB equations, in a spirit similar to adversarial training. The effectiveness of the proposed algorithm is empirically demonstrated through experiments. Our code is released at https://github.com/LithiumDA/L_inf-PINN.
Pages: 13
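For orientation, the central object in the abstract is the L-p physics-informed loss, which in its usual form is the L^p norm of the PDE residual over the domain, ( ∫_Ω |F[u_θ](x)|^p dx )^{1/p}, plus boundary or terminal penalty terms; the L-2 loss is the p = 2 case and the L-infinity loss is the p → ∞ limit, i.e. the worst-case residual. The sketch below shows one way such an L-infinity objective can be approximately minimized in the adversarial-training spirit the abstract describes: an inner gradient-ascent loop pushes collocation points toward large-residual regions, and an outer step minimizes the largest residual found. This is a minimal illustrative sketch only, not the authors' released implementation (see the linked repository for that); the 1D heat equation stands in for the HJB residual, and all names and hyperparameters here (PINN, abs_residual, step sizes, point counts) are assumptions chosen for brevity.

```python
# Hedged sketch of L-infinity physics-informed training via an adversarial
# search over collocation points. Not the authors' released code; the heat
# equation u_t - u_xx = 0 is a stand-in for an HJB residual.
import torch
import torch.nn as nn

class PINN(nn.Module):
    """Small MLP surrogate u_theta(t, x); sizes are illustrative."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, tx):  # tx: (N, 2) with columns (t, x)
        return self.net(tx)

def abs_residual(model, tx):
    """Pointwise |PDE residual|; heat equation used as a placeholder PDE."""
    tx = tx.requires_grad_(True)
    u = model(tx)
    du = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1:2]
    return (u_t - u_xx).abs()

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    # Random collocation points in [0, 1]^2 as the starting guess.
    tx = torch.rand(256, 2)
    # Inner (adversarial) loop: gradient ascent on the points to find
    # locations where the current residual is large.
    for _ in range(5):
        tx = tx.detach().requires_grad_(True)
        r = abs_residual(model, tx)
        g = torch.autograd.grad(r.sum(), tx)[0]
        tx = (tx + 0.01 * g.sign()).clamp(0.0, 1.0)
    # Outer step: minimize the approximate worst-case (L-infinity) residual.
    loss = abs_residual(model, tx.detach()).max()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Swapping abs_residual for the HJB residual of interest and adding penalties for initial/terminal conditions would recover the same recipe for the setting studied in the paper; by contrast, the standard L-2 training objective would replace the max over points with a mean of squared residuals at fixed random collocation points.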
Related papers
50 records in total
  • [1] Sobolev Training for Physics-Informed Neural Networks
    Son, Hwijae
    Jang, Jin Woo
    Han, Woo Jin
    Hwang, Hyung Ju
    COMMUNICATIONS IN MATHEMATICAL SCIENCES, 2023, 21 (06) : 1679 - 1705
  • [2] Enforcing Dirichlet boundary conditions in physics-informed neural networks and variational physics-informed neural networks
    Berrone, S.
    Canuto, C.
    Pintore, M.
    Sukumar, N.
    HELIYON, 2023, 9 (08)
  • [3] Loss-attentional physics-informed neural networks
    Song, Yanjie
    Wang, He
    Yang, He
    Taccari, Maria Luisa
    Chen, Xiaohui
    JOURNAL OF COMPUTATIONAL PHYSICS, 2024, 501
  • [4] Temporal consistency loss for physics-informed neural networks
    Thakur, Sukirt
    Raissi, Maziar
    Mitra, Harsa
    Ardekani, Arezoo M.
    PHYSICS OF FLUIDS, 2024, 36 (07)
  • [5] Respecting causality for training physics-informed neural networks
    Wang, Sifan
    Sankaran, Shyam
    Perdikaris, Paris
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2024, 421
  • [6] Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning
    De Ryck, Tim
    Mishra, Siddhartha
    ACTA NUMERICA, 2024, 33 : 633 - 713
  • [7] Physics-informed Neural Network for system identification of rotors
    Liu, Xue
    Cheng, Wei
    Xing, Ji
    Chen, Xuefeng
    Zhao, Zhibin
    Zhang, Rongyong
    Huang, Qian
    Lu, Jinqi
    Zhou, Hongpeng
    Zheng, Wei Xing
    Pan, Wei
    IFAC-PAPERSONLINE, 2024, 58 (15) : 307 - 312
  • [8] A Physics-Informed Recurrent Neural Network for RRAM Modeling
    Sha, Yanliang
    Lan, Jun
    Li, Yida
    Chen, Quan
    ELECTRONICS, 2023, 12 (13)
  • [9] Physics-informed Neural Network for Quadrotor Dynamical Modeling
    Gu, Weibin
    Primatesta, Stefano
    Rizzo, Alessandro
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2024, 171
  • [10] Parareal with a Physics-Informed Neural Network as Coarse Propagator
    Ibrahim, Abdul Qadir
    Goetschel, Sebastian
    Ruprecht, Daniel
    EURO-PAR 2023: PARALLEL PROCESSING, 2023, 14100 : 649 - 663