Neural parameter calibration for large-scale multiagent models

Cited: 11
Authors
Gaskin, Thomas [1]
Pavliotis, Grigorios A. [1,2]
Girolami, Mark [1,3,4]
Affiliations
[1] Univ Cambridge, Dept Appl Math & Theoret Phys, Cambridge CB3 0WA, England
[2] Imperial Coll London, Dept Math, London SW7 2AZ, England
[3] Univ Cambridge, Dept Engn, Cambridge CB2 1PZ, England
[4] Alan Turing Inst, London NW1 2DB, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
PHASE-TRANSITION; INVERSE PROBLEMS; NETWORKS; DYNAMICS; PHYSICS;
DOI
10.1073/pnas.2216415120
Chinese Library Classification (CLC)
O [Mathematical sciences and chemistry]; P [Astronomy and earth sciences]; Q [Biological sciences]; N [Natural sciences, general];
Discipline codes
07; 0710; 09;
Abstract
Computational models have become a powerful tool in the quantitative sciences to understand the behavior of complex systems that evolve in time. However, they often contain a potentially large number of free parameters whose values cannot be obtained from theory but need to be inferred from data. This is especially the case for models in the social sciences, economics, or computational epidemiology. Yet, many current parameter estimation methods are mathematically involved and computationally slow to run. In this paper, we present a computationally simple and fast method to retrieve accurate probability densities for model parameters using neural differential equations. We present a pipeline comprising multiagent models acting as forward solvers for systems of ordinary or stochastic differential equations and a neural network to then extract parameters from the data generated by the model. The two combined create a powerful tool that can quickly estimate densities on model parameters, even for very large systems. We demonstrate the method on synthetic time series data of the SIR model of the spread of infection and perform an in-depth analysis of the Harris-Wilson model of economic activity on a network, representing a nonconvex problem. For the latter, we apply our method both to synthetic data and to data of economic activity across Greater London. We find that our method calibrates the model orders of magnitude more accurately than a previous study of the same dataset using classical techniques, while running between 195 and 390 times faster.
Pages: 10
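The pipeline described in the abstract, a multiagent forward solver paired with a neural network that maps observed time series to model parameters, can be illustrated with a drastically simplified sketch. Everything here is invented for illustration and is not the paper's implementation: a single linear layer with a softplus link stands in for the deep network, central finite differences replace backpropagation through the solver, explicit Euler integrates the SIR equations, and the parameter values are arbitrary.

```python
import numpy as np

def simulate_sir(beta, gamma, s0=0.99, i0=0.01, steps=60, dt=0.5):
    """Forward-solve the SIR ODEs with explicit Euler; return the infected series."""
    s, i = s0, i0
    out = np.empty(steps)
    for t in range(steps):
        new_inf = beta * s * i                      # infection flux S -> I
        s, i = s - dt * new_inf, i + dt * (new_inf - gamma * i)
        out[t] = i
    return out

def softplus(x):
    """Positive link so the predicted beta and gamma are valid rates."""
    return np.log1p(np.exp(x))

rng = np.random.default_rng(0)
true_beta, true_gamma = 0.6, 0.15                   # illustrative ground truth
observed = simulate_sir(true_beta, true_gamma)      # synthetic "data"

# A degenerate "neural network": one linear layer mapping the observed
# series to the two positive parameters (beta, gamma).
n = observed.size
params = np.concatenate([0.01 * rng.standard_normal(2 * n), np.zeros(2)])

def predict(p):
    W, b = p[:-2].reshape(2, n), p[-2:]
    return softplus(W @ observed + b)

def loss(p):
    beta, gamma = predict(p)
    return np.mean((simulate_sir(beta, gamma) - observed) ** 2)

# Train with central finite-difference gradients, a crude stand-in for
# backpropagating the simulation loss through the numerical solver.
lr, eps = 0.2, 1e-5
initial = loss(params)
for _ in range(150):
    grad = np.empty_like(params)
    for j in range(params.size):
        params[j] += eps; up = loss(params)
        params[j] -= 2 * eps; down = loss(params)
        params[j] += eps
        grad[j] = (up - down) / (2 * eps)
    params -= lr * grad

final = loss(params)
beta_hat, gamma_hat = predict(params)
print(f"beta~{beta_hat:.3f} gamma~{gamma_hat:.3f} loss {initial:.3e} -> {final:.3e}")
```

In the paper the network is trained on many simulated series and its outputs over training sweep out a density on the parameters; the sketch above shows only a single point estimate on one synthetic series.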