
Deep Liquid State Machines With Neural Plasticity for Video Activity Recognition.

Authors

Soures Nicholas, Kudithipudi Dhireesha

Affiliations

Neuromorphic AI Laboratory, Rochester Institute of Technology, Rochester, NY, United States.

Publication

Front Neurosci. 2019 Jul 4;13:686. doi: 10.3389/fnins.2019.00686. eCollection 2019.

DOI: 10.3389/fnins.2019.00686
PMID: 31333404
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6621912/
Abstract

Real-world applications such as first-person video activity recognition require intelligent edge devices. However, size, weight, and power constraints of the embedded platforms cannot support resource intensive state-of-the-art algorithms. Machine learning lite algorithms, such as reservoir computing, with shallow 3-layer networks are computationally frugal as only the output layer is trained. By reducing network depth and plasticity, reservoir computing minimizes computational power and complexity, making the algorithms optimal for edge devices. However, as a trade-off for their frugal nature, reservoir computing sacrifices computational power compared to state-of-the-art methods. A good compromise between reservoir computing and fully supervised networks are the proposed deep-LSM networks. The deep-LSM is a deep spiking neural network which captures dynamic information over multiple time-scales with a combination of randomly connected layers and unsupervised layers. The deep-LSM processes the captured dynamic information through an attention modulated readout layer to perform classification. We demonstrate that the deep-LSM achieves an average of 84.78% accuracy on the DogCentric video activity recognition task, beating state-of-the-art. The deep-LSM also shows up to 91.13% memory savings and up to 91.55% reduction in synaptic operations when compared to similar recurrent neural network models. Based on these results we claim that the deep-LSM is capable of overcoming limitations of traditional reservoir computing, while maintaining the low computational cost associated with reservoir computing.
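The abstract's key point — reservoir computing is frugal because the recurrent layer stays random and fixed while only the output layer is trained — can be illustrated with a minimal sketch. The following is a generic echo-state-network-style example, not the paper's spiking deep-LSM; all sizes, the toy task, and the ridge-regression readout are illustrative assumptions.

```python
import numpy as np

# Generic reservoir-computing sketch (echo-state-network style): the
# recurrent "reservoir" weights are random and never updated; only the
# linear readout is fit. Illustrative only -- not the paper's deep-LSM.
rng = np.random.default_rng(0)
N_IN, N_RES, N_OUT, T = 3, 100, 2, 500

# Fixed random input and recurrent weights (never trained).
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed reservoir with an input sequence; collect states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce channel 0 and recall channel 1 delayed by one step,
# so the readout must exploit the reservoir's short-term memory.
inputs = rng.uniform(-1, 1, (T, N_IN))
targets = np.stack([inputs[:, 0], np.roll(inputs[:, 1], 1)], axis=1)

states = run_reservoir(inputs)

# Train ONLY the readout, via closed-form ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N_RES),
                        states.T @ targets).T

pred = states @ W_out.T
mse = np.mean((pred[10:] - targets[10:]) ** 2)  # skip warm-up steps
```

Because the only learned parameters are the `N_OUT x N_RES` readout matrix, training reduces to one linear solve — the computational frugality the abstract describes; the deep-LSM extends this idea with stacked spiking reservoirs, unsupervised plasticity, and an attention-modulated readout.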


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f6c5/6621912/607a4792ddc1/fnins-13-00686-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f6c5/6621912/26ea48a90122/fnins-13-00686-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f6c5/6621912/41558665d552/fnins-13-00686-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f6c5/6621912/650c2cab6439/fnins-13-00686-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f6c5/6621912/df60f56c129e/fnins-13-00686-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f6c5/6621912/fb152904f685/fnins-13-00686-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f6c5/6621912/d666dfa51089/fnins-13-00686-g0007.jpg

Similar Articles

1. Deep Liquid State Machines With Neural Plasticity for Video Activity Recognition. Front Neurosci. 2019 Jul 4;13:686. doi: 10.3389/fnins.2019.00686. eCollection 2019.
2. Extended liquid state machines for speech recognition. Front Neurosci. 2022 Oct 28;16:1023470. doi: 10.3389/fnins.2022.1023470. eCollection 2022.
3. Reservoir based spiking models for univariate Time Series Classification. Front Comput Neurosci. 2023 Jun 8;17:1148284. doi: 10.3389/fncom.2023.1148284. eCollection 2023.
4. Recent advances in physical reservoir computing: A review. Neural Netw. 2019 Jul;115:100-123. doi: 10.1016/j.neunet.2019.03.005. Epub 2019 Mar 20.
5. Liquid State Machine on SpiNNaker for Spatio-Temporal Classification Tasks. Front Neurosci. 2022 Mar 14;16:819063. doi: 10.3389/fnins.2022.819063. eCollection 2022.
6. Memristors for Neuromorphic Circuits and Artificial Intelligence Applications. Materials (Basel). 2020 Feb 20;13(4):938. doi: 10.3390/ma13040938.
7. A Digital Liquid State Machine With Biologically Inspired Learning and Its Application to Speech Recognition. IEEE Trans Neural Netw Learn Syst. 2015 Nov;26(11):2635-49. doi: 10.1109/TNNLS.2015.2388544. Epub 2015 Jan 27.
8. Computational Efficiency of a Modular Reservoir Network for Image Recognition. Front Comput Neurosci. 2021 Feb 5;15:594337. doi: 10.3389/fncom.2021.594337. eCollection 2021.
9. Event-driven implementation of deep spiking convolutional neural networks for supervised classification using the SpiNNaker neuromorphic platform. Neural Netw. 2020 Jan;121:319-328. doi: 10.1016/j.neunet.2019.09.008. Epub 2019 Sep 24.
10. Effects of synaptic connectivity on liquid state machine performance. Neural Netw. 2013 Feb;38:39-51. doi: 10.1016/j.neunet.2012.11.003. Epub 2012 Nov 17.

Cited By

1. Reinforced liquid state machines-new training strategies for spiking neural networks based on reinforcements. Front Comput Neurosci. 2025 May 23;19:1569374. doi: 10.3389/fncom.2025.1569374. eCollection 2025.
2. Direct training high-performance deep spiking neural networks: a review of theories and methods. Front Neurosci. 2024 Jul 31;18:1383844. doi: 10.3389/fnins.2024.1383844. eCollection 2024.
3. Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks. Sci Rep. 2023 Oct 7;13(1):16924. doi: 10.1038/s41598-023-43488-x.
4. Heterogeneous recurrent spiking neural network for spatio-temporal classification. Front Neurosci. 2023 Jan 30;17:994517. doi: 10.3389/fnins.2023.994517. eCollection 2023.
5. Extended liquid state machines for speech recognition. Front Neurosci. 2022 Oct 28;16:1023470. doi: 10.3389/fnins.2022.1023470. eCollection 2022.
6. Heterogeneous Ensemble-Based Spike-Driven Few-Shot Online Learning. Front Neurosci. 2022 May 9;16:850932. doi: 10.3389/fnins.2022.850932. eCollection 2022.
7. Computational Efficiency of a Modular Reservoir Network for Image Recognition. Front Comput Neurosci. 2021 Feb 5;15:594337. doi: 10.3389/fncom.2021.594337. eCollection 2021.

References

1. A Novel Energy-Efficient Approach for Human Activity Recognition. Sensors (Basel). 2017 Sep 8;17(9):2064. doi: 10.3390/s17092064.
2. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines. Front Neurosci. 2017 Jun 21;11:324. doi: 10.3389/fnins.2017.00324. eCollection 2017.
3. Optimal Degrees of Synaptic Connectivity. Neuron. 2017 Mar 8;93(5):1153-1164.e7. doi: 10.1016/j.neuron.2017.01.030. Epub 2017 Feb 16.
4. An Online Structural Plasticity Rule for Generating Better Reservoirs. Neural Comput. 2016 Nov;28(11):2557-2584. doi: 10.1162/NECO_a_00886. Epub 2016 Sep 14.
5. Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Front Comput Neurosci. 2015 Aug 3;9:99. doi: 10.3389/fncom.2015.00099. eCollection 2015.
6. Homeostatic Plasticity and STDP: Keeping a Neuron's Cool in a Fluctuating World. Front Synaptic Neurosci. 2010 Jun 7;2:5. doi: 10.3389/fnsyn.2010.00005. eCollection 2010.
7. The other side of the engram: experience-driven changes in neuronal intrinsic excitability. Nat Rev Neurosci. 2003 Nov;4(11):885-900. doi: 10.1038/nrn1248.
8. Robust spatial working memory through homeostatic synaptic scaling in heterogeneous cortical networks. Neuron. 2003 May 8;38(3):473-85. doi: 10.1016/s0896-6273(03)00255-1.
9. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 2002 Nov;14(11):2531-60. doi: 10.1162/089976602760407955.
10. Differential signaling via the same axon of neocortical pyramidal neurons. Proc Natl Acad Sci U S A. 1998 Apr 28;95(9):5323-8. doi: 10.1073/pnas.95.9.5323.