Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability.

Authors

Fernandez-Musoles Carlos, Coca Daniel, Richmond Paul

Affiliations

Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom.

Computer Science, University of Sheffield, Sheffield, United Kingdom.

Publication

Front Neuroinform. 2019 Apr 2;13:19. doi: 10.3389/fninf.2019.00019. eCollection 2019.

Abstract

In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neural Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural-scale, brain-size simulations are to be realized, it is necessary to use parallel and distributed models of computing. Communication is recognized as the dominant part of distributed SNN simulations. As the number of computational nodes increases, the proportion of time the simulation spends in useful computing (computational efficiency) is reduced, which imposes a limit on scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronization, process handshake, and data exchange. We introduce a connectivity-aware allocation of neurons to compute nodes by modeling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex system simulations in which communication is modeled as a graph network.
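The connectivity-aware allocation can be illustrated with a toy sketch (not the paper's implementation). Each neuron's outgoing synapses form a hyperedge, and an allocation is scored with the standard (λ − 1) connectivity-cut metric: each hyperedge that spans λ processes requires λ − 1 interprocess messages when its source neuron spikes. The network, allocations, and helper below are hypothetical examples, but they show why a partition that keeps densely connected clusters on one process yields a sparser communication graph than a naive round-robin allocation.

```python
def comm_cost(hyperedges, part):
    """(lambda - 1) connectivity cut: for each hyperedge, count the
    distinct processes it spans, minus one (messages per spike)."""
    return sum(len({part[v] for v in edge}) - 1 for edge in hyperedges)

# Hypothetical 6-neuron SNN: neuron -> list of postsynaptic targets.
synapses = {0: [1, 2], 1: [2], 2: [3], 3: [4, 5], 4: [5], 5: [3]}

# One hyperedge per source neuron: the source plus its targets.
hyperedges = [[src] + dsts for src, dsts in synapses.items()]

# Round-robin allocation scatters connected neurons across 2 processes.
round_robin = {n: n % 2 for n in range(6)}

# A connectivity-aware partition keeps the two dense clusters together.
clustered = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}

print(comm_cost(hyperedges, round_robin))  # 5 interprocess messages
print(comm_cost(hyperedges, clustered))    # 1 (only the 2 -> 3 synapse crosses)
```

In practice the partition itself would come from a hypergraph partitioner rather than being written by hand, but the cost metric above is what such tools minimize, and lowering it directly sparsifies the communication graph that the dynamic sparse exchange then exploits.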

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/3bdcd2e2e3ca/fninf-13-00019-g0001.jpg
