van der Vlag Michiel, Woodman Marmaduke, Fousek Jan, Diaz-Pier Sandra, Pérez Martín Aarón, Jirsa Viktor, Morrison Abigail
Simulation and Data Lab Neuroscience, Institute for Advanced Simulation, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich GmbH, JARA, Jülich, Germany.
Institut de Neurosciences des Systèmes, Aix Marseille Université, Marseille, France.
Front Netw Physiol. 2022 Feb 14;2:826345. doi: 10.3389/fnetp.2022.826345. eCollection 2022.
Whole-brain network models are now an established tool in scientific and clinical research; however, their use in a larger workflow still adds significant informatics complexity. We propose a tool, RateML, that enables users to generate such models from a succinct declarative description in which the mathematics of the model is stated without specifying how its simulation should be implemented. RateML builds on NeuroML's Low Entropy Model Specification (LEMS), an XML-based language for specifying models of dynamical systems, and allows descriptions of the neural mass and discretized neural field models implemented by The Virtual Brain (TVB) simulator: the end user describes the model's mathematics once and generates and runs code for different languages, targeting both CPUs for fast single simulations and GPUs for parallel ensemble simulations. High-performance parallel simulations are crucial for tuning the many parameters of a model to empirical data such as functional magnetic resonance imaging (fMRI) within reasonable execution times on small or modest hardware resources. Specifically, while RateML can generate Python model code, it also enables generation of Compute Unified Device Architecture (CUDA) C++ code for NVIDIA GPUs. When a CUDA implementation of a model is generated, a tailored model driver class is produced, enabling the user to tweak the driver by hand and perform the parameter sweep. The model and driver can be executed with a high degree of parallelization on any CUDA-capable NVIDIA GPU, either locally or in a compute cluster environment. The results reported in this manuscript show that with the CUDA code generated by RateML, it is possible to explore thousands of parameter combinations for different models on a single GPU, substantially reducing parameter exploration times and resource usage for brain network models and, in turn, accelerating the research workflow itself. This provides a new tool for creating efficient and broader parameter-fitting workflows, supporting studies on larger cohorts, and deriving more robust and statistically relevant conclusions about brain dynamics.