RateML: A Code Generation Tool for Brain Network Models

Cited by: 2
Authors
van der Vlag, Michiel [1 ]
Woodman, Marmaduke [2 ]
Fousek, Jan [2 ]
Diaz-Pier, Sandra [1 ]
Martin, Aaron Perez [1 ]
Jirsa, Viktor [2 ]
Morrison, Abigail [1 ,3 ,4 ,5 ,6 ]
Affiliations
[1] Forschungszentrum Julich, Inst Adv Simulat, Julich Supercomp Ctr JSC, Simulat & Data Lab Neurosci,JARA, Julich, Germany
[2] Aix Marseille Univ, Inst Neurosci Syst, Marseille, France
[3] Inst Neurosci & Med INM 6, Julich, Germany
[4] Inst Adv Simulat IAS 6, Julich, Germany
[5] JARA Inst Brain, Julich, Germany
[6] Rhein Westfal TH Aachen, Comp Sci 3 Software Engn, Aachen, Germany
Funding
EU Horizon 2020;
Keywords
brain network models; domain specific language; automatic code generation; high performance computing; simulation;
DOI
10.3389/fnetp.2022.826345
CLC Number
Q4 [Physiology];
Subject Classification Code
071003;
Abstract
Whole-brain network models are now an established tool in scientific and clinical research; however, their use in a larger workflow still adds significant informatics complexity. We propose a tool, RateML, that enables users to generate such models from a succinct declarative description in which the mathematics of the model are described without specifying how their simulation should be implemented. RateML builds on NeuroML's Low Entropy Model Specification (LEMS), an XML-based language for specifying models of dynamical systems, allowing descriptions of neural mass and discretized neural field models as implemented by The Virtual Brain (TVB) simulator: the end user describes the model's mathematics once and generates and runs code for different languages, targeting both CPUs for fast single simulations and GPUs for parallel ensemble simulations. High-performance parallel simulations are crucial for tuning the many parameters of a model to empirical data, such as functional magnetic resonance imaging (fMRI), with reasonable execution times on small or modest hardware resources. Specifically, while RateML can generate Python model code, it also enables generation of Compute Unified Device Architecture (CUDA) C++ code for NVIDIA GPUs. When a CUDA implementation of a model is generated, a tailored model driver class is produced, enabling the user to tweak the driver by hand and perform the parameter sweep. The model and driver can be executed on any compute-capable NVIDIA GPU with a high degree of parallelization, either locally or in a compute cluster environment. The results reported in this manuscript show that with the CUDA code generated by RateML, it is possible to explore thousands of parameter combinations on a single Graphics Processing Unit for different models, substantially reducing parameter exploration times and resource usage for brain network models and, in turn, accelerating the research workflow itself.
This provides a new tool for creating efficient and broader parameter-fitting workflows, supporting studies on larger cohorts, and deriving more robust and statistically relevant conclusions about brain dynamics.
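To make the "succinct declarative description" concrete: in LEMS-derived dialects, a model's mathematics is declared as state variables and time derivatives, leaving the integration scheme and target backend (Python or CUDA) to the code generator. The sketch below is purely illustrative; the element and attribute names follow general LEMS conventions and are assumptions, not the exact RateML schema, for which the RateML documentation should be consulted.

```xml
<!-- Illustrative LEMS-style model description (hypothetical names,
     not the exact RateML schema). Only the mathematics is declared;
     the simulation backend is chosen at code-generation time. -->
<Lems>
  <ComponentType name="SimpleOscillator"
                 description="Hypothetical two-state neural mass model">
    <!-- Parameters that a GPU parameter sweep could vary per thread -->
    <Constant name="tau" value="1.0" description="time scale"/>
    <Constant name="g"   value="0.1" description="global coupling strength"/>
    <Dynamics>
      <StateVariable name="V"/>
      <StateVariable name="W"/>
      <TimeDerivative variable="V"
                      value="tau * (V - V*V*V/3.0 + W + g * coupling)"/>
      <TimeDerivative variable="W" value="(1.0 - V) / tau"/>
    </Dynamics>
  </ComponentType>
</Lems>
```

From one such description, the generator can emit Python model code for CPU simulation or a CUDA kernel plus a driver class in which each GPU thread integrates the same equations under a different parameter combination, which is what makes ensemble explorations of thousands of combinations practical on a single device.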
Pages: 13