Blundell Inga, Brette Romain, Cleland Thomas A, Close Thomas G, Coca Daniel, Davison Andrew P, Diaz-Pier Sandra, Fernandez Musoles Carlos, Gleeson Padraig, Goodman Dan F M, Hines Michael, Hopkins Michael W, Kumbhar Pramod, Lester David R, Marin Bóris, Morrison Abigail, Müller Eric, Nowotny Thomas, Peyser Alexander, Plotnikov Dimitri, Richmond Paul, Rowley Andrew, Rumpe Bernhard, Stimberg Marcel, Stokes Alan B, Tomkins Adam, Trensch Guido, Woodman Marmaduke, Eppler Jochen Martin
Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich, Germany.
Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France.
Front Neuroinform. 2018 Nov 5;12:68. doi: 10.3389/fninf.2018.00068. eCollection 2018.
Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the scale of detail, the ever-growing variety of point neuron models raises the implementation barrier even for those based on the relatively simple integrate-and-fire model. Independently of model complexity, all modeling methods crucially depend on an accurate and efficient transformation of mathematical model descriptions into executable code. Neuroscientists usually publish model descriptions in terms of the underlying mathematical equations; actually simulating the models, however, requires translating those equations into code. This can cause problems, because errors may be introduced if the translation is carried out by hand, and code written by neuroscientists may not be computationally efficient. Furthermore, the translated code may target different hardware platforms or operating system variants, or be written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which restricts flexibility. The second is to allow model definitions in a high-level interpreted language, which may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages.
In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, which differ considerably in aim, scope and functionality. This article provides an overview of existing pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them.
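The code-generation approach discussed above can be illustrated with a minimal sketch. The model-description schema, the `generate_step_function` helper, and the leaky integrate-and-fire parameter values below are all invented for illustration; they do not correspond to any of the pipelines reviewed here, which use far richer description languages and generate optimized C/C++ or platform-specific code rather than Python.

```python
# Hypothetical sketch: translate a high-level neuron-model description
# (an ad-hoc dict, not any standard format) into executable update code
# for a leaky integrate-and-fire neuron.

model = {
    "parameters": {"tau": 10.0, "v_rest": -65.0, "r_m": 10.0},
    "dynamics": "dv/dt = (v_rest - v + r_m * i_ext) / tau",
    "threshold": "v >= -50.0",
    "reset": "v = -65.0",
}

def generate_step_function(model, dt=0.1):
    """Generate Python source for one forward-Euler integration step."""
    # Take the right-hand side of the single ODE (assumed form: dv/dt = rhs).
    rhs = model["dynamics"].split("=", 1)[1].strip()
    params = ", ".join(f"{k}={v}" for k, v in model["parameters"].items())
    return (
        f"def step(v, i_ext, {params}, dt={dt}):\n"
        f"    v = v + dt * ({rhs})  # forward Euler update\n"
        f"    spiked = {model['threshold']}\n"
        f"    if spiked:\n"
        f"        {model['reset']}\n"
        f"    return v, spiked\n"
    )

# "Compile" the generated source and use the resulting function.
namespace = {}
exec(generate_step_function(model), namespace)
step = namespace["step"]

v, spiked = step(-65.0, 2.0)  # one step with a 2 nA input current
```

The same description could equally be rendered into C or CUDA source by a different backend of the generator, which is precisely what makes the approach attractive: one model definition, many optimized targets.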