Suppr 超能文献

Core technology patent: CN118964589B. All rights reserved.
粤ICP备2023148730号-1 · Suppr © 2026


Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability

Authors

Fernandez-Musoles Carlos, Coca Daniel, Richmond Paul

Affiliations

Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom.

Computer Science, University of Sheffield, Sheffield, United Kingdom.

Publication

Front Neuroinform. 2019 Apr 2;13:19. doi: 10.3389/fninf.2019.00019. eCollection 2019.

DOI: 10.3389/fninf.2019.00019
PMID: 31001102
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6454199/
Abstract

In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neuronal Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural scale brain-size simulations are to be realized, it is necessary to use parallel and distributed models of computing. Communication is recognized as the dominant part of distributed SNN simulations. As the number of computational nodes increases, the proportion of time the simulation spends in useful computing (computational efficiency) is reduced and therefore applies a limit to scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronization, process handshake and data exchange. We introduce a connectivity-aware allocation of neurons to compute nodes by modeling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex system simulations in which communication is modeled as a graph network.
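The abstract's connectivity-aware allocation can be illustrated with a toy model: treat each presynaptic neuron as a hyperedge spanning itself and all of its targets, and score an allocation by how many hyperedges straddle more than one compute node. The sketch below is a minimal illustration of that idea only; the neuron counts, connection pattern, and cut measure are invented for the example and do not reproduce the paper's partitioner or cost model.

```python
# Toy model of SNN-as-hypergraph allocation (illustrative, not the paper's method).
# Each presynaptic neuron defines one hyperedge containing itself and all of its
# postsynaptic targets; a hyperedge that spans more than one compute node implies
# interprocess spike communication.

def hyperedges(connections, n_neurons):
    """connections: list of (pre, post) synapses -> one hyperedge per neuron."""
    edges = {pre: {pre} for pre in range(n_neurons)}
    for pre, post in connections:
        edges[pre].add(post)
    return list(edges.values())

def cut_size(edges, assignment):
    """Count hyperedges whose neurons are spread over more than one node."""
    return sum(1 for e in edges if len({assignment[v] for v in e}) > 1)

# Two independent chains of 4 neurons, allocated to 2 compute nodes.
conns = [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7)]
edges = hyperedges(conns, 8)

round_robin = {v: v % 2 for v in range(8)}          # connectivity-blind
clustered = {v: 0 if v < 4 else 1 for v in range(8)} # connectivity-aware

print(cut_size(edges, round_robin), cut_size(edges, clustered))  # -> 6 0
```

The connectivity-aware split keeps each chain on one node, so no hyperedge is cut and (in this toy measure) no spikes cross process boundaries, which is the sparsity effect the hypergraph partitioning aims for.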

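The "process handshake" saving behind dynamic sparse exchange can also be sketched with a simple counting model: a dense handshake has every rank notify every other rank whether spikes are pending (P·(P-1) control messages per step), while a sparse exchange sends only to ranks that actually have pending spikes. The message counts and traffic pattern below are illustrative assumptions, not the paper's measurements or its actual MPI protocol.

```python
# Illustrative message-count comparison for the communication handshake
# (a counting sketch; the paper's dynamic sparse exchange protocol itself
# is not reproduced here).

def dense_handshake_msgs(n_ranks):
    """Every rank tells every other rank whether it has spikes for it."""
    return n_ranks * (n_ranks - 1)

def sparse_exchange_msgs(send_targets):
    """Only ranks with pending spikes contact their actual destinations.
    send_targets: {rank: set of destination ranks with pending spikes}."""
    return sum(len(targets) for targets in send_targets.values())

# Hypothetical sparse spike traffic among 4 ranks in one simulation step.
targets = {0: {1}, 1: {2}, 2: set(), 3: {0}}
print(dense_handshake_msgs(4), sparse_exchange_msgs(targets))  # -> 12 3
```

As the communication graph gets sparser (few ranks actually exchanging spikes each step), the gap between the two counts grows with the number of ranks, which is why the abstract reports the largest efficiency gains at scale.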

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/3bdcd2e2e3ca/fninf-13-00019-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/1bee9715b47c/fninf-13-00019-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/0d6c50da0a05/fninf-13-00019-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/d97f8b88ec59/fninf-13-00019-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/56c57fc85b1b/fninf-13-00019-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/1c73f14550d2/fninf-13-00019-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/e33419efba49/fninf-13-00019-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/1f771ba84cdb/fninf-13-00019-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/4709ab87cb89/fninf-13-00019-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/9a7190dcd402/fninf-13-00019-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/b77ec2dc5044/fninf-13-00019-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c71b/6454199/e98582dac4ce/fninf-13-00019-g0012.jpg

Similar articles

1. Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability.
Front Neuroinform. 2019 Apr 2;13:19. doi: 10.3389/fninf.2019.00019. eCollection 2019.
2. Large-Scale Simulation of a Layered Cortical Sheet of Spiking Network Model Using a Tile Partitioning Method.
Front Neuroinform. 2019 Nov 29;13:71. doi: 10.3389/fninf.2019.00071. eCollection 2019.
3. Efficient Communication in Distributed Simulations of Spiking Neuronal Networks With Gap Junctions.
Front Neuroinform. 2020 May 5;14:12. doi: 10.3389/fninf.2020.00012. eCollection 2020.
4. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
Front Neuroinform. 2018 Feb 16;12:2. doi: 10.3389/fninf.2018.00002. eCollection 2018.
5. A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors.
Neural Netw. 2009 Jul-Aug;22(5-6):791-800. doi: 10.1016/j.neunet.2009.06.028. Epub 2009 Jul 2.
6. Hierarchical Network Connectivity and Partitioning for Reconfigurable Large-Scale Neuromorphic Systems.
Front Neurosci. 2022 Jan 31;15:797654. doi: 10.3389/fnins.2021.797654. eCollection 2021.
7. Routing Brain Traffic Through the Von Neumann Bottleneck: Parallel Sorting and Refactoring.
Front Neuroinform. 2022 Mar 1;15:785068. doi: 10.3389/fninf.2021.785068. eCollection 2021.
8. Deploying and Optimizing Embodied Simulations of Large-Scale Spiking Neural Networks on HPC Infrastructure.
Front Neuroinform. 2022 May 19;16:884180. doi: 10.3389/fninf.2022.884180. eCollection 2022.
9. SWsnn: A Novel Simulator for Spiking Neural Networks.
J Comput Biol. 2023 Sep;30(9):951-960. doi: 10.1089/cmb.2023.0098. Epub 2023 Aug 16.
10. ReplaceNet: real-time replacement of a biological neural circuit with a hardware-assisted spiking neural network.
Front Neurosci. 2023 Aug 10;17:1161592. doi: 10.3389/fnins.2023.1161592. eCollection 2023.

Cited by

1. Hierarchical Network Connectivity and Partitioning for Reconfigurable Large-Scale Neuromorphic Systems.
Front Neurosci. 2022 Jan 31;15:797654. doi: 10.3389/fnins.2021.797654. eCollection 2021.

References

1. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
Front Neuroinform. 2018 Feb 16;12:2. doi: 10.3389/fninf.2018.00002. eCollection 2018.
2. Multi-scale account of the network structure of macaque visual cortex.
Brain Struct Funct. 2018 Apr;223(3):1409-1435. doi: 10.1007/s00429-017-1554-4. Epub 2017 Nov 16.
3. China Brain Project: Basic Neuroscience, Brain Diseases, and Brain-Inspired Computing.
Neuron. 2016 Nov 2;92(3):591-596. doi: 10.1016/j.neuron.2016.10.050.
4. The Human Brain Project: Creating a European Research Infrastructure to Decode the Human Brain.
Neuron. 2016 Nov 2;92(3):574-581. doi: 10.1016/j.neuron.2016.10.046.
5. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON.
Neural Comput. 2016 Oct;28(10):2063-90. doi: 10.1162/NECO_a_00876. Epub 2016 Aug 24.
6. GeNN: a code generation framework for accelerated brain simulations.
Sci Rep. 2016 Jan 7;6:18854. doi: 10.1038/srep18854.
7. Spiking network simulation code for petascale computers.
Front Neuroinform. 2014 Oct 10;8:78. doi: 10.3389/fninf.2014.00078. eCollection 2014.
8. Limits to high-speed simulations of spiking neural networks using general-purpose computers.
Front Neuroinform. 2014 Sep 11;8:76. doi: 10.3389/fninf.2014.00076. eCollection 2014.
9. HRLSim: a high performance spiking neural network simulator for GPGPU clusters.
IEEE Trans Neural Netw Learn Syst. 2014 Feb;25(2):316-31. doi: 10.1109/TNNLS.2013.2276056.
10. A novel CPU/GPU simulation environment for large-scale biologically realistic neural modeling.
Front Neuroinform. 2013 Oct 2;7:19. doi: 10.3389/fninf.2013.00019. eCollection 2013.