Dept. Neurology, Charité - University Medicine, Berlin, Germany; Bernstein Focus State Dependencies of Learning, Bernstein Center for Computational Neuroscience, Berlin, Germany.
Institut de Neurosciences des Systèmes UMR INSERM 1106, Aix-Marseille Université Faculté de Médecine, Marseille, France.
Neuroimage. 2015 Aug 15;117:343-57. doi: 10.1016/j.neuroimage.2015.03.055. Epub 2015 Mar 31.
Large amounts of multimodal neuroimaging data are acquired every year worldwide. To extract high-dimensional information for computational neuroscience applications, standardized data fusion and efficient reduction into integrative data structures are required. Such self-consistent multimodal data sets can be used for computational brain modeling to constrain models with individual measurable features of the brain, as is done with The Virtual Brain (TVB). TVB is a simulation platform that uses empirical structural and functional data to build full brain models of individual humans. For convenient model construction, we developed a processing pipeline for structural, functional and diffusion-weighted magnetic resonance imaging (MRI) and optionally electroencephalography (EEG) data. The pipeline combines several state-of-the-art neuroinformatics tools to generate subject-specific cortical and subcortical parcellations, surface tessellations, structural and functional connectomes, lead field matrices, electrical source activity estimates and region-wise aggregated blood-oxygen-level-dependent (BOLD) functional MRI (fMRI) time series. The output files of the pipeline can be directly uploaded to TVB to create and simulate individualized large-scale network models that incorporate intra- and intercortical interaction on the basis of cortical surface triangulations and white matter tractography. We detail the pitfalls of the individual processing streams and discuss approaches to validation. With the pipeline we also introduce novel ways of estimating the transmission strengths of fiber tracts in whole-brain structural connectivity (SC) networks and compare the outcomes of different tractography or parcellation approaches. We tested the functionality of the pipeline on 50 multimodal data sets. To quantify the robustness of the connectome extraction part of the pipeline, we computed several metrics of its rescan reliability and compared them with other tractography approaches. Together with the pipeline we present several principles to guide future efforts to standardize brain model construction. The code of the pipeline and the fully processed data sets are made available to the public via The Virtual Brain website (thevirtualbrain.org) and via GitHub (https://github.com/BrainModes/TVB-empirical-data-pipeline). Furthermore, the pipeline can be directly used with High Performance Computing (HPC) resources on the Neuroscience Gateway Portal (http://www.nsgportal.org) through a convenient web-interface.
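The abstract refers to rescan-reliability metrics for the extracted connectomes without specifying them. Purely as an illustration of what such a metric can look like, the sketch below computes one common choice: the Pearson correlation between the upper-triangular weights of the SC matrices from a scan-rescan pair. The function name, the 68-region matrix size, and the synthetic matrices are assumptions made for this example and are not taken from the paper.

```python
import numpy as np

def rescan_reliability(sc_scan1, sc_scan2):
    """Pearson correlation between the upper-triangular weights of two
    structural connectivity (SC) matrices from a scan-rescan pair.

    This is only one simple reliability metric; the paper's actual set of
    metrics is not specified in the abstract.
    """
    sc_scan1 = np.asarray(sc_scan1, dtype=float)
    sc_scan2 = np.asarray(sc_scan2, dtype=float)
    # Use only the upper triangle (excluding the diagonal): SC matrices are
    # symmetric and self-connections carry no information here.
    iu = np.triu_indices_from(sc_scan1, k=1)
    return np.corrcoef(sc_scan1[iu], sc_scan2[iu])[0, 1]

# Example with two synthetic symmetric "connectomes" standing in for the
# pipeline's region-by-region SC weight matrices (68 regions is assumed).
rng = np.random.default_rng(0)
n_regions = 68
base = rng.random((n_regions, n_regions))
sc_a = (base + base.T) / 2
sc_b = sc_a + 0.05 * rng.standard_normal((n_regions, n_regions))
sc_b = (sc_b + sc_b.T) / 2
print(f"rescan reliability (Pearson r): {rescan_reliability(sc_a, sc_b):.3f}")
```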