MULTI-FIDELITY PHYSICS-CONSTRAINED NEURAL NETWORKS WITH MINIMAX ARCHITECTURE FOR MATERIALS MODELING

Times cited: 0
Authors:
Liu, Dehao [1 ]
Pusarla, Pranav [2 ]
Wang, Yan [2 ]
Affiliations:
[1] SUNY Binghamton, Binghamton, NY 13902 USA
[2] Georgia Inst Technol, Atlanta, GA 30332 USA
Keywords:
Machine learning; Physics-constrained neural networks; Multi-fidelity metamodeling; Minimax optimization; Partial differential equations; NUMERICAL-SOLUTION; APPROXIMATIONS;
DOI: not available
CLC number: TP39 [Computer applications]
Discipline codes: 081203; 0835
Abstract:
Data sparsity remains the main challenge in applying machine learning models to complex scientific and engineering problems. The root cause is the "curse of dimensionality": training algorithms must explore and exploit a very high-dimensional parameter space to find the optimal parameters of complex models. In this work, a new scheme of multi-fidelity physics-constrained neural networks with minimax architecture is proposed to improve the data efficiency of training neural networks by incorporating physical knowledge as constraints and sampling data at multiple fidelities. In this framework, fully connected neural networks at two levels of fidelity are combined to improve prediction accuracy. The low-fidelity neural network approximates the low-fidelity data, whereas the high-fidelity neural network approximates the correlation function between the low-fidelity and high-fidelity data. To systematically search for the optimal weights of the various loss terms and thereby reduce training time, the Dual-Dimer algorithm is adopted to locate high-order saddle points of the resulting minimax optimization problem. The proposed framework is demonstrated on two-dimensional heat transfer, phase transition, and dendritic growth problems, which are fundamental in materials modeling. With the same set of training data, the prediction error of the multi-fidelity physics-constrained neural network with minimax architecture can be two orders of magnitude lower than that of the multi-fidelity neural network with minimax architecture.
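The two-fidelity composition and the adversarial loss weighting described in the abstract can be sketched in a minimal numpy toy. This is not the authors' implementation: random tanh features stand in for the fully connected networks, plain gradient descent-ascent stands in for the Dual-Dimer saddle-point search, and a known symmetry identity of the toy target (y(x) + y(x + 1/2) = 0 for a sine) stands in for a PDE residual constraint. The fidelity pair `y_lo`/`y_hi` and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fidelity pair: the low-fidelity model is a biased,
# scaled version of the high-fidelity truth (a common benchmark setup).
def y_lo(x): return 0.8 * np.sin(2 * np.pi * x) + 0.1
def y_hi(x): return np.sin(2 * np.pi * x)

# Random-feature "networks" phi(X) @ theta -- linear stand-ins for MLPs.
def feats(X, W, b): return np.tanh(X @ W + b)

d = 40
W1, b1 = rng.normal(0, 5, (1, d)), rng.uniform(-5, 5, d)  # LF net, input x
W2, b2 = rng.normal(0, 2, (2, d)), rng.uniform(-2, 2, d)  # HF net, input (x, y_LF)

x_lo = rng.uniform(0, 1, (200, 1)); t_lo = y_lo(x_lo)     # plentiful LF data
x_hi = rng.uniform(0, 1, (20, 1));  t_hi = y_hi(x_hi)     # scarce HF data
x_c  = rng.uniform(0, 0.5, (50, 1))                       # collocation points

th_lo, th_hi = np.zeros((d, 1)), np.zeros((d, 1))
lam = np.ones(3)               # loss weights, trained by ascent (minimax)
P_lo = feats(x_lo, W1, b1)
eta, rho = 2e-3, 1e-3

def hf_feats(x):               # HF net sees (x, LF prediction); LF grad stopped
    return feats(np.hstack([x, feats(x, W1, b1) @ th_lo]), W2, b2)

for step in range(20000):
    r_lo = P_lo @ th_lo - t_lo                 # LF data residual
    P_hi = hf_feats(x_hi)
    r_hi = P_hi @ th_hi - t_hi                 # HF data residual
    Pc1, Pc2 = hf_feats(x_c), hf_feats(x_c + 0.5)
    r_c = (Pc1 + Pc2) @ th_hi                  # "physics" constraint residual
    L = np.array([(r_lo**2).mean(), (r_hi**2).mean(), (r_c**2).mean()])
    # descent on network parameters, ascent on loss weights
    th_lo -= eta * 2 * lam[0] * P_lo.T @ r_lo / len(x_lo)
    th_hi -= eta * 2 * (lam[1] * P_hi.T @ r_hi / len(x_hi)
                        + lam[2] * (Pc1 + Pc2).T @ r_c / len(x_c))
    lam = np.clip(lam + rho * L, 0.1, 5.0)     # clipped gradient ascent

x_te = np.linspace(0, 1, 101)[:, None]
pred = hf_feats(x_te) @ th_hi
rmse = float(np.sqrt(((pred - y_hi(x_te)) ** 2).mean()))
print(f"HF test RMSE: {rmse:.3f}, loss weights: {np.round(lam, 2)}")
```

The key structural point mirrors the abstract: the high-fidelity network does not fit the scarce high-fidelity data directly but learns the correlation between the low-fidelity prediction and the high-fidelity target, while the inner maximization over `lam` automatically rebalances the data and constraint losses instead of hand-tuning fixed weights.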
Pages: 10