Parsimonious neural networks learn interpretable physical laws

Cited by: 19
Authors
Desai, Saaketh [1 ,2 ]
Strachan, Alejandro [1 ,2 ]
Affiliations
[1] Purdue Univ, Sch Mat Engn, W Lafayette, IN 47907 USA
[2] Purdue Univ, Birck Nanotechnol Ctr, W Lafayette, IN 47907 USA
DOI
10.1038/s41598-021-92278-w
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
Machine learning is playing an increasing role in the physical sciences, and significant progress has been made towards embedding domain knowledge into models. Less explored is its use to discover interpretable physical laws from data. We propose parsimonious neural networks (PNNs) that combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony. The power and versatility of the approach are demonstrated by developing models for classical mechanics and for predicting the melting temperature of materials from fundamental properties. In the first example, the resulting PNNs are easily interpretable as Newton's second law, expressed as a non-trivial time integrator that exhibits time-reversibility and conserves energy, where the parsimony is critical to extract underlying symmetries from the data. In the second case, the PNNs not only find the celebrated Lindemann melting law but also new relationships that outperform it in the Pareto sense of parsimony vs. accuracy.
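The abstract describes the core PNN recipe: an evolutionary search over neural-network design choices whose fitness rewards both accuracy and simplicity. The following is a minimal, illustrative sketch of that idea only; the candidate encoding (here, just the hidden-layer activation), the complexity costs, the penalty weight, the tiny training loop, and all hyperparameters are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of the parsimonious-neural-network (PNN) idea:
# evolutionary search over network design choices, scored by
# fitness = prediction error + parsimony penalty (lower is better).
# All specifics below are illustrative assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for a simple linear law y = 3x - 1; a parsimonious
# search should prefer the identity activation, i.e. a linear model.
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
Y = 3.0 * X - 1.0

ACTS = {"identity": lambda z: z, "tanh": np.tanh}
COMPLEXITY = {"identity": 0.0, "tanh": 1.0}  # assumed per-choice cost

def train_and_score(genes, lam=0.1, steps=500, lr=0.05):
    """Train a 1-hidden-layer net whose activation is fixed by `genes`
    and return MSE + lam * complexity. Random init makes this noisy,
    which is acceptable for an illustration."""
    W1 = rng.normal(scale=0.5, size=(1, 2)); b1 = np.zeros(2)
    W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)
    act = ACTS[genes[0]]
    for _ in range(steps):
        H = act(X @ W1 + b1)            # hidden layer
        P = H @ W2 + b2                 # linear output layer
        G = 2.0 * (P - Y) / len(X)      # dMSE/dP
        dH = G @ W2.T                   # backprop to hidden layer
        if genes[0] == "tanh":
            dH = dH * (1.0 - H ** 2)    # tanh derivative
        W2 -= lr * (H.T @ G);  b2 -= lr * G.sum(0)
        W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(0)
    mse = float(np.mean((act(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
    return mse + lam * COMPLEXITY[genes[0]]

def mutate(genes):
    # Occasionally swap the activation choice.
    return (rng.choice(list(ACTS)),) if rng.random() < 0.3 else genes

# Tiny genetic algorithm: keep the fitter half, refill by mutation.
pop = [(rng.choice(list(ACTS)),) for _ in range(8)]
for _ in range(5):
    scored = sorted(pop, key=train_and_score)
    pop = scored[:4] + [mutate(g) for g in scored[:4]]

print("best design:", sorted(pop, key=train_and_score)[0])
```

On this synthetic linear data the parsimony term steers the search toward the identity-activation (purely linear) candidate, mirroring in miniature how a complexity penalty in the fitness favors interpretable, minimal models.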
Pages: 9
Related Papers
50 records in total
  • [11] Interpretable Deep Neural Networks for Enhancer Prediction
    Kim, Seong Gon
    Theera-Ampornpunt, Nawanol
    Grama, Ananth
    Chaterji, Somali
    PROCEEDINGS 2015 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, 2015, : 242 - 249
  • [12] Inducing Causal Structure for Interpretable Neural Networks
    Geiger, Atticus
    Wu, Zhengxuan
    Lu, Hanson
    Rozner, Josh
    Kreiss, Elisa
    Icard, Thomas
    Goodman, Noah D.
    Potts, Christopher
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [13] Synchronization-Inspired Interpretable Neural Networks
    Han, Wei
    Qin, Zhili
    Liu, Jiaming
    Boehm, Christian
    Shao, Junming
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 16762 - 16774
  • [14] Interpretable Architecture Neural Networks for Function Visualization
    Zhang, Shengtong
    Apley, Daniel W.
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2023, 32 (04) : 1258 - 1271
  • [16] ExplaiNN: interpretable and transparent neural networks for genomics
    Novakovsky, Gherman
    Fornes, Oriol
    Saraswat, Manu
    Mostafavi, Sara
    Wasserman, Wyeth W.
    GENOME BIOLOGY, 24
  • [17] Transformer neural networks for interpretable flood forecasting
    Castangia, Marco
    Grajales, Lina Maria Medina
    Aliberti, Alessandro
    Rossi, Claudio
    Macii, Alberto
    Macii, Enrico
    Patti, Edoardo
    ENVIRONMENTAL MODELLING & SOFTWARE, 2023, 160
  • [18] Silicon neural networks learn as they compute
    Paillet, G
    LASER FOCUS WORLD, 1996, 32 (08) : S17 - S19
  • [19] Biomedical Diagnosis and Prediction using Parsimonious Fuzzy Neural Networks
    Chen, Yuting
    Er, Meng Joo
    38TH ANNUAL CONFERENCE ON IEEE INDUSTRIAL ELECTRONICS SOCIETY (IECON 2012), 2012, : 1477 - 1482
  • [20] Microcontrollers Learn to Embrace Neural Networks
    Edwards, Chris
    NEW ELECTRONICS, 2022, 55 (11) : 34 - 35