
Hierarchical Network Connectivity and Partitioning for Reconfigurable Large-Scale Neuromorphic Systems

Authors

Mysore Nishant, Hota Gopabandhu, Deiss Stephen R, Pedroni Bruno U, Cauwenberghs Gert

Affiliations

Integrated Systems Neuroengineering Laboratory, Department of Bioengineering, University of California, San Diego, La Jolla, CA, United States.

Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, United States.

Publication

Front Neurosci. 2022 Jan 31;15:797654. doi: 10.3389/fnins.2021.797654. eCollection 2021.

DOI: 10.3389/fnins.2021.797654
PMID: 35173573
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8842996/
Abstract

We present an efficient and scalable partitioning method for mapping large-scale neural network models with locally dense and globally sparse connectivity onto reconfigurable neuromorphic hardware. Scalability in computational efficiency, i.e., amount of time spent in actual computation, remains a huge challenge in very large networks. Most partitioning algorithms also struggle to address the scalability in network workloads in finding a globally optimal partition and efficiently mapping onto hardware. As communication is regarded as the most energy and time-consuming part of such distributed processing, the partitioning framework is optimized for compute-balanced, memory-efficient parallel processing targeting low-latency execution and dense synaptic storage, with minimal routing across various compute cores. We demonstrate highly scalable and efficient partitioning for connectivity-aware and hierarchical address-event routing resource-optimized mapping, significantly reducing the total communication volume recursively when compared to random balanced assignment. We showcase our results working on synthetic networks with varying degrees of sparsity factor and fan-out, small-world networks, feed-forward networks, and a hemibrain connectome reconstruction of the fruit-fly brain. The combination of our method and practical results suggest a promising path toward extending to very large-scale networks and scalable hardware-aware partitioning.
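The abstract's central comparison is that connectivity-aware assignment of neurons to compute cores reduces total communication volume relative to random balanced assignment, because spike traffic between cores corresponds to synapses whose endpoints land on different cores. The sketch below illustrates that effect on a small synthetic network with locally dense, globally sparse connectivity. It is not the authors' partitioning algorithm; the block count, block size, edge probabilities, and core count are arbitrary illustrative assumptions.

```python
import random

random.seed(42)

N_BLOCKS, BLOCK = 8, 32          # 8 dense clusters of 32 neurons each
N = N_BLOCKS * BLOCK
P_IN, P_OUT = 0.30, 0.005        # locally dense, globally sparse

# Build an undirected synthetic connectome as an edge list.
edges = []
for u in range(N):
    for v in range(u + 1, N):
        p = P_IN if u // BLOCK == v // BLOCK else P_OUT
        if random.random() < p:
            edges.append((u, v))

def cut_size(assign):
    """Communication volume: edges whose endpoints map to different cores."""
    return sum(assign[u] != assign[v] for u, v in edges)

N_CORES = 8

# (a) Random balanced assignment: shuffle neurons, deal round-robin to cores.
perm = list(range(N))
random.shuffle(perm)
rand_assign = {u: i % N_CORES for i, u in enumerate(perm)}

# (b) Connectivity-aware assignment: keep each dense cluster on one core.
aware_assign = {u: u // BLOCK for u in range(N)}

print("random balanced cut:   ", cut_size(rand_assign))
print("connectivity-aware cut:", cut_size(aware_assign))
```

With these parameters nearly all edges are intra-cluster, so the cluster-aligned assignment cuts only the sparse global edges, while the random assignment cuts roughly (N_CORES − 1)/N_CORES of all edges; this is the gap that practical partitioners (recursive bisection, multilevel methods such as those the paper builds on) try to approach at scale.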


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3238/8842996/1fc7a2e55a9f/fnins-15-797654-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3238/8842996/afb6b28c0612/fnins-15-797654-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3238/8842996/1faebc4f8609/fnins-15-797654-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3238/8842996/692809330f4d/fnins-15-797654-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3238/8842996/2f280303a1ac/fnins-15-797654-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3238/8842996/811a094d2e48/fnins-15-797654-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3238/8842996/ea3f5649231a/fnins-15-797654-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3238/8842996/a552de2b8dcd/fnins-15-797654-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3238/8842996/aa57b583b182/fnins-15-797654-g0009.jpg

Similar Articles

1. Hierarchical Network Connectivity and Partitioning for Reconfigurable Large-Scale Neuromorphic Systems.
Front Neurosci. 2022 Jan 31;15:797654. doi: 10.3389/fnins.2021.797654. eCollection 2021.
2. Mosaic: in-memory computing and routing for small-world spike-based neuromorphic systems.
Nat Commun. 2024 Jan 2;15(1):142. doi: 10.1038/s41467-023-44365-x.
3. Comparing Neuromorphic Solutions in Action: Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms.
Front Neurosci. 2016 Jan 8;9:491. doi: 10.3389/fnins.2015.00491. eCollection 2015.
4. Scalable Digital Neuromorphic Architecture for Large-Scale Biophysically Meaningful Neural Network With Multi-Compartment Neurons.
IEEE Trans Neural Netw Learn Syst. 2020 Jan;31(1):148-162. doi: 10.1109/TNNLS.2019.2899936. Epub 2019 Mar 18.
5. A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.
Front Neurosci. 2021 Nov 16;15:694170. doi: 10.3389/fnins.2021.694170. eCollection 2021.
6. EdgeMap: An Optimized Mapping Toolchain for Spiking Neural Network in Edge Computing.
Sensors (Basel). 2023 Jul 20;23(14):6548. doi: 10.3390/s23146548.
7. Programming time-multiplexed reconfigurable hardware using a scalable neuromorphic compiler.
IEEE Trans Neural Netw Learn Syst. 2012 Jun;23(6):889-901. doi: 10.1109/TNNLS.2012.2191795.
8. Spatially Arranged Sparse Recurrent Neural Networks for Energy Efficient Associative Memory.
IEEE Trans Neural Netw Learn Syst. 2020 Jan;31(1):24-38. doi: 10.1109/TNNLS.2019.2899344. Epub 2019 Mar 15.
9. Optimizing event-based neural networks on digital neuromorphic architecture: a comprehensive design space exploration.
Front Neurosci. 2024 Mar 28;18:1335422. doi: 10.3389/fnins.2024.1335422. eCollection 2024.
10. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems.
IEEE Trans Neural Netw Learn Syst. 2017 Oct;28(10):2408-2422. doi: 10.1109/TNNLS.2016.2572164. Epub 2016 Jul 29.

Cited By

1. Scalable network emulation on analog neuromorphic hardware.
Front Neurosci. 2025 Feb 5;18:1523331. doi: 10.3389/fnins.2024.1523331. eCollection 2024.

References Cited in This Article

1. A connectome and analysis of the adult central brain.
Elife. 2020 Sep 7;9:e57443. doi: 10.7554/eLife.57443.
2. Deep Spiking Neural Networks for Large Vocabulary Automatic Speech Recognition.
Front Neurosci. 2020 Mar 17;14:199. doi: 10.3389/fnins.2020.00199. eCollection 2020.
3. Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability.
Front Neuroinform. 2019 Apr 2;13:19. doi: 10.3389/fninf.2019.00019. eCollection 2019.
4. Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.
Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.
5. Training Deep Spiking Convolutional Neural Networks With STDP-Based Unsupervised Pre-training Followed by Supervised Fine-Tuning.
Front Neurosci. 2018 Aug 3;12:435. doi: 10.3389/fnins.2018.00435. eCollection 2018.
6. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems.
IEEE Trans Neural Netw Learn Syst. 2017 Oct;28(10):2408-2422. doi: 10.1109/TNNLS.2016.2572164. Epub 2016 Jul 29.
7. Neuroscience thinks big (and collaboratively).
Nat Rev Neurosci. 2013 Sep;14(9):659-64. doi: 10.1038/nrn3578.
8. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model.
Cereb Cortex. 2014 Mar;24(3):785-806. doi: 10.1093/cercor/bhs358. Epub 2012 Dec 2.
9. Small-world brain networks.
Neuroscientist. 2006 Dec;12(6):512-23. doi: 10.1177/1073858406293182.