
Spiking network simulation code for petascale computers.

Affiliations

Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Centre, Jülich, Germany; Programming Environment Research Team, RIKEN Advanced Institute for Computational Science, Kobe, Japan.

Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany.

Publication information

Front Neuroinform. 2014 Oct 10;8:78. doi: 10.3389/fninf.2014.00078. eCollection 2014.

Abstract

Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
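The "double collapse" described in the abstract can be illustrated with a small C++ sketch. This is a hedged illustration, not the actual NEST data structures: all names here (`HomConnector`, `VecConnector`, `push`, `demo`) are hypothetical. The idea it demonstrates is the one stated above: when a compute node typically holds only one or a few synapses of a single type for a given source neuron, those synapses can be stored in-place in a fixed-capacity connector generated by the template machinery, promoted step by step as synapses are added, and spilled into a heap-allocated vector only when the target list grows beyond the common case.

```cpp
#include <cstddef>
#include <vector>

struct Spike { double weight_sum = 0.0; };

// One synapse of a single (homogeneous) type; no per-synapse type tag needed.
struct Synapse {
    double weight;
    void send(Spike& s) const { s.weight_sum += weight; }
};

struct ConnectorBase {
    virtual ~ConnectorBase() = default;
    virtual void send(Spike&) const = 0;
    virtual std::size_t size() const = 0;
    // Adds a synapse; may return a replacement connector of the next capacity.
    virtual ConnectorBase* push(const Synapse&) = 0;
};

// Fixed-capacity connector for the common case of very few synapses of one
// type per node: synapses live in-place, with no vector bookkeeping overhead.
template <std::size_t K>
struct HomConnector : ConnectorBase {
    Synapse syns[K];
    void send(Spike& s) const override { for (const auto& syn : syns) syn.send(s); }
    std::size_t size() const override { return K; }
    ConnectorBase* push(const Synapse&) override;
};

// Fallback for the rare long target lists.
struct VecConnector : ConnectorBase {
    std::vector<Synapse> syns;
    void send(Spike& s) const override { for (const auto& syn : syns) syn.send(s); }
    std::size_t size() const override { return syns.size(); }
    ConnectorBase* push(const Synapse& y) override { syns.push_back(y); return this; }
};

template <std::size_t K>
ConnectorBase* HomConnector<K>::push(const Synapse& y) {
    if constexpr (K < 3) {  // promote capacity K -> K+1, still in-place
        auto* c = new HomConnector<K + 1>();
        for (std::size_t i = 0; i < K; ++i) c->syns[i] = syns[i];
        c->syns[K] = y;
        delete this;
        return c;
    } else {                // beyond the common case: spill into a vector
        auto* v = new VecConnector();
        v->syns.assign(syns, syns + K);
        v->syns.push_back(y);
        delete this;
        return v;
    }
}

// Hypothetical usage: three promotions, then a spill into the vector fallback.
double demo() {
    ConnectorBase* c = new HomConnector<1>();
    static_cast<HomConnector<1>*>(c)->syns[0] = Synapse{0.5};
    c = c->push(Synapse{1.0});  // HomConnector<2>
    c = c->push(Synapse{2.0});  // HomConnector<3>
    c = c->push(Synapse{4.0});  // VecConnector
    Spike s;
    c->send(s);
    double total = s.weight_sum;  // 0.5 + 1.0 + 2.0 + 4.0 = 7.5
    delete c;
    return total;
}
```

Because `if constexpr` prunes the promotion branch at `K == 3`, the template recursion terminates without ever instantiating `HomConnector<4>`; the capacity ladder is generated entirely at compile time, which is the metaprogramming flavor the abstract alludes to.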

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0afb/4193238/07c43f5d310d/fninf-08-00078-g0001.jpg
