Pronold Jari, Jordan Jakob, Wylie Brian J N, Kitayama Itaru, Diesmann Markus, Kunkel Susanne
Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany.
RWTH Aachen University, Aachen, Germany.
Front Neuroinform. 2022 Mar 1;15:785068. doi: 10.3389/fninf.2021.785068. eCollection 2021.
Generic simulation code for spiking neuronal networks spends most of its runtime in the phase where spikes have arrived at a compute node and need to be delivered to their target neurons. These spikes were emitted over the last interval between communication steps by source neurons distributed across many compute nodes, and they are inherently irregular and unsorted with respect to their targets. To find those targets, the spikes need to be dispatched into a three-dimensional data structure, with decisions on target thread and synapse type made along the way. With growing network size, a compute node receives spikes from an increasing number of different source neurons, until in the limit each synapse on the compute node has a unique source. Here, we show analytically how this sparsity emerges over the practically relevant range of network sizes, from a hundred thousand to a billion neurons. By profiling a production code, we investigate opportunities for algorithmic changes that avoid indirections and branching. Every thread hosts an equal share of the neurons on a compute node. In the original algorithm, all threads search through all spikes to pick out the relevant ones. With increasing network size, the fraction of hits remains invariant, but the absolute number of rejections grows. Our new alternative algorithm divides the spikes equally among the threads and immediately sorts them in parallel according to target thread and synapse type. After this, every thread completes delivery solely for the section of spikes addressed to its own neurons. Independent of the number of threads, each spike is examined only twice. The new algorithm halves the number of instructions in spike delivery, which leads to a reduction in simulation time of up to 40%. Thus, spike delivery is a fully parallelizable process with a single synchronization point and is thereby well suited for many-core systems.
Our analysis indicates that further progress requires a reduction of the latency that the instructions experience in accessing memory. The study provides the foundation for the exploration of methods of latency hiding like software pipelining and software-induced prefetching.
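The contrast between the two delivery schemes can be illustrated with a minimal sketch. The code below is not the authors' NEST implementation; it is a simplified model, with threads simulated by sequential loops and spikes reduced to hypothetical (target thread, synapse type, id) tuples, that reproduces the scaling argument: in the original scheme every thread scans all spikes (threads × spikes looks), while in the new scheme each spike is looked at exactly twice, once while being binned and once while being delivered, regardless of the thread count.

```python
import random
from collections import defaultdict

random.seed(42)
NUM_THREADS = 4
NUM_SYN_TYPES = 2
NUM_SPIKES = 20

# Hypothetical spike records: (target thread, synapse type, spike id).
spikes = [(random.randrange(NUM_THREADS), random.randrange(NUM_SYN_TYPES), i)
          for i in range(NUM_SPIKES)]

def deliver_original(spikes):
    """Original scheme: every thread scans ALL spikes and keeps only those
    targeting its own neurons -> NUM_THREADS * NUM_SPIKES looks in total."""
    looks = 0
    delivered = defaultdict(list)
    for thread in range(NUM_THREADS):  # each iteration models one thread
        for tgt_thread, syn_type, sid in spikes:
            looks += 1
            if tgt_thread == thread:   # hit; everything else is a rejection
                delivered[(tgt_thread, syn_type)].append(sid)
    return delivered, looks

def deliver_new(spikes):
    """New scheme: spikes are split evenly among threads; each thread sorts
    its chunk into (target thread, synapse type) bins (first look), then,
    after a single synchronization point, each thread delivers the bins
    addressed to it (second look) -> 2 * NUM_SPIKES looks in total."""
    looks = 0
    bins = defaultdict(list)
    chunk = (len(spikes) + NUM_THREADS - 1) // NUM_THREADS
    for thread in range(NUM_THREADS):          # parallel in reality
        for tgt_thread, syn_type, sid in spikes[thread * chunk:(thread + 1) * chunk]:
            looks += 1
            bins[(tgt_thread, syn_type)].append(sid)
    # --- single synchronization point ---
    delivered = defaultdict(list)
    for thread in range(NUM_THREADS):          # parallel in reality
        for syn_type in range(NUM_SYN_TYPES):
            for sid in bins[(thread, syn_type)]:
                looks += 1
                delivered[(thread, syn_type)].append(sid)
    return delivered, looks
```

Both schemes deliver the same spikes to the same (thread, synapse type) targets, but the look count of the original scheme grows with the number of threads, whereas the new scheme stays at two looks per spike, which is the property the abstract attributes to the halved instruction count.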