Liquid State Machine on SpiNNaker for Spatio-Temporal Classification Tasks

Authors

Patiño-Saucedo Alberto, Rostro-González Horacio, Serrano-Gotarredona Teresa, Linares-Barranco Bernabé

Affiliations

Department of Electronics Engineering, University of Guanajuato, Salamanca, Mexico.

Instituto de Microelectrónica de Sevilla (IMSE-CNM), Consejo Superior de Investigaciones Científicas (CSIC) and Univ. de Sevilla, Seville, Spain.

Publication

Front Neurosci. 2022 Mar 14;16:819063. doi: 10.3389/fnins.2022.819063. eCollection 2022.

DOI: 10.3389/fnins.2022.819063
PMID: 35360182
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8964061/
Abstract

Liquid State Machines (LSMs) are computing reservoirs composed of recurrently connected Spiking Neural Networks. They have attracted research interest both for their capacity to model biological structures and as promising pattern-recognition tools well suited to implementation on neuromorphic processors, benefiting from the modest computing resources required for their training. However, it has proven difficult to optimize LSMs for complex tasks such as event-based computer vision, and few implementations on large-scale neuromorphic processors have been attempted. In this work, we show that offline-trained LSMs implemented on the SpiNNaker neuromorphic processor can classify visual events, achieving state-of-the-art performance on the event-based N-MNIST dataset. The readout layer is trained using a recent adaptation of back-propagation-through-time (BPTT) for SNNs, while the internal weights of the reservoir are kept static. Results show that mapping our LSM from a Deep Learning framework to SpiNNaker does not affect the performance of the classification task. Additionally, we show that weight quantization, which substantially reduces the memory footprint of the LSM, has only a small impact on performance.
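The core LSM idea described in the abstract can be sketched in a few lines: a fixed, randomly connected spiking reservoir transforms input spike trains into a high-dimensional state, and only the readout is trained. The NumPy sketch below is illustrative, not the paper's model: it uses simplified leaky integrate-and-fire dynamics, a toy two-class spike-pattern task, and a least-squares readout as a stand-in for the paper's BPTT-trained readout; all sizes and constants are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir dimensions and static (untrained) random weights.
N_IN, N_RES, N_CLASSES, T = 16, 100, 2, 50
W_in = rng.normal(0.0, 0.8, (N_RES, N_IN))                      # input -> reservoir
W_res = rng.normal(0.0, 0.4, (N_RES, N_RES)) / np.sqrt(N_RES)   # recurrent weights
np.fill_diagonal(W_res, 0.0)

def reservoir_states(spikes_in, tau=0.9, v_th=1.0):
    """Run simplified LIF dynamics over T steps; return per-neuron spike counts."""
    v = np.zeros(N_RES)      # membrane potentials
    s = np.zeros(N_RES)      # spikes emitted at the previous step
    counts = np.zeros(N_RES)
    for t in range(spikes_in.shape[0]):
        v = tau * v + W_in @ spikes_in[t] + W_res @ s  # leak + feedforward + recurrence
        s = (v >= v_th).astype(float)                  # threshold crossing -> spike
        v = np.where(s > 0, 0.0, v)                    # reset neurons that fired
        counts += s
    return counts

def make_sample(label):
    """Toy task: which half of the input channels carries elevated spike rates."""
    x = (rng.random((T, N_IN)) < 0.05).astype(float)   # background noise spikes
    lo, hi = (0, N_IN // 2) if label == 0 else (N_IN // 2, N_IN)
    x[:, lo:hi] += rng.random((T, hi - lo)) < 0.4
    return np.clip(x, 0.0, 1.0)

labels = rng.integers(0, N_CLASSES, 200)
feats = np.stack([reservoir_states(make_sample(y)) for y in labels])

# Train the readout only; the reservoir weights above are never updated.
X = np.hstack([feats, np.ones((len(feats), 1))])       # add bias column
Y = np.eye(N_CLASSES)[labels]                          # one-hot targets
W_out, *_ = np.linalg.lstsq(X, Y, rcond=None)
acc = np.mean(np.argmax(X @ W_out, axis=1) == labels)
print(f"train accuracy: {acc:.2f}")
```

The separation of concerns shown here is what makes LSMs attractive on neuromorphic hardware: the expensive recurrent part is static and can be mapped to the chip as-is, while training touches only the small readout matrix.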

Figures

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/90ba/8964061/26d95c7a02b4/fnins-16-819063-g0001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/90ba/8964061/bb691c3d0530/fnins-16-819063-g0002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/90ba/8964061/81201c7ea8c5/fnins-16-819063-g0003.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/90ba/8964061/b2462146e204/fnins-16-819063-g0004.jpg
Figure 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/90ba/8964061/c7decc031687/fnins-16-819063-g0005.jpg
Figure 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/90ba/8964061/8717f910e6fc/fnins-16-819063-g0006.jpg
Figure 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/90ba/8964061/6058ab4f337c/fnins-16-819063-g0007.jpg
Figure 8: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/90ba/8964061/b809755c7ebb/fnins-16-819063-g0008.jpg

Similar articles

1. Liquid State Machine on SpiNNaker for Spatio-Temporal Classification Tasks.
   Front Neurosci. 2022 Mar 14;16:819063. doi: 10.3389/fnins.2022.819063. eCollection 2022.
2. Event-driven implementation of deep spiking convolutional neural networks for supervised classification using the SpiNNaker neuromorphic platform.
   Neural Netw. 2020 Jan;121:319-328. doi: 10.1016/j.neunet.2019.09.008. Epub 2019 Sep 24.
3. Neuromorphic Sentiment Analysis Using Spiking Neural Networks.
   Sensors (Basel). 2023 Sep 6;23(18):7701. doi: 10.3390/s23187701.
4. E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware.
   Front Neurosci. 2022 Nov 28;16:1018006. doi: 10.3389/fnins.2022.1018006. eCollection 2022.
5. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware.
   Front Neuroanat. 2016 Apr 7;10:37. doi: 10.3389/fnana.2016.00037. eCollection 2016.
6. Comparing SNNs and RNNs on neuromorphic vision datasets: Similarities and differences.
   Neural Netw. 2020 Dec;132:108-120. doi: 10.1016/j.neunet.2020.08.001. Epub 2020 Aug 17.
7. SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition.
   Front Neurosci. 2018 Aug 23;12:524. doi: 10.3389/fnins.2018.00524. eCollection 2018.
8. Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture.
   Front Neurosci. 2016 Sep 14;10:420. doi: 10.3389/fnins.2016.00420. eCollection 2016.
9. Reservoir based spiking models for univariate Time Series Classification.
   Front Comput Neurosci. 2023 Jun 8;17:1148284. doi: 10.3389/fncom.2023.1148284. eCollection 2023.
10. A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.
    Front Neurosci. 2021 Nov 16;15:694170. doi: 10.3389/fnins.2021.694170. eCollection 2021.

Cited by

1. Neuromorphic algorithms for brain implants: a review.
   Front Neurosci. 2025 Apr 11;19:1570104. doi: 10.3389/fnins.2025.1570104. eCollection 2025.
2. An accurate and fast learning approach in the biologically spiking neural network.
   Sci Rep. 2025 Feb 24;15(1):6585. doi: 10.1038/s41598-025-90113-0.
3. Reservoir based spiking models for univariate Time Series Classification.
   Front Comput Neurosci. 2023 Jun 8;17:1148284. doi: 10.3389/fncom.2023.1148284. eCollection 2023.
4. and Sparse Binary Coincidence (SBC) memories: Fast, robust learning and inference for neuromorphic architectures.
   Front Neuroinform. 2023 Mar 21;17:1125844. doi: 10.3389/fninf.2023.1125844. eCollection 2023.
5. Critically synchronized brain waves form an effective, robust and flexible basis for human memory and learning.
   Sci Rep. 2023 Mar 16;13(1):4343. doi: 10.1038/s41598-023-31365-6.

References

1. The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2022 Jul;33(7):2744-2757. doi: 10.1109/TNNLS.2020.3044364. Epub 2022 Jul 6.
2. Deep learning in spiking neural networks.
   Neural Netw. 2019 Mar;111:47-63. doi: 10.1016/j.neunet.2018.12.002. Epub 2018 Dec 18.
3. sPyNNaker: A Software Package for Running PyNN Simulations on SpiNNaker.
   Front Neurosci. 2018 Nov 20;12:816. doi: 10.3389/fnins.2018.00816. eCollection 2018.
4. Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype.
   Front Neurosci. 2018 Nov 16;12:840. doi: 10.3389/fnins.2018.00840. eCollection 2018.
5. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.
   Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331. eCollection 2018.
6. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.
   Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.
7. Deep Visual-Semantic Alignments for Generating Image Descriptions.
   IEEE Trans Pattern Anal Mach Intell. 2017 Apr;39(4):664-676. doi: 10.1109/TPAMI.2016.2598339. Epub 2016 Aug 5.
8. Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.
   Front Neurosci. 2015 Nov 16;9:437. doi: 10.3389/fnins.2015.00437. eCollection 2015.
9. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.
   Science. 2014 Aug 8;345(6197):668-73. doi: 10.1126/science.1254642. Epub 2014 Aug 7.
10. Framewise phoneme classification with bidirectional LSTM and other neural network architectures.
    Neural Netw. 2005 Jun-Jul;18(5-6):602-10. doi: 10.1016/j.neunet.2005.06.042.