

Transferring learning from external to internal weights in echo-state networks with sparse connectivity.

Affiliations

Department of Electrical Engineering, Stanford University, Stanford, California, United States of America.

Publication Information

PLoS One. 2012;7(5):e37372. doi: 10.1371/journal.pone.0037372. Epub 2012 May 24.

DOI: 10.1371/journal.pone.0037372
PMID: 22655041
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC3360031/
Abstract

Modifying weights within a recurrent network to improve performance on a task has proven to be difficult. Echo-state networks in which modification is restricted to the weights of connections onto network outputs provide an easier alternative, but at the expense of modifying the typically sparse architecture of the network by including feedback from the output back into the network. We derive methods for using the values of the output weights from a trained echo-state network to set recurrent weights within the network. The result of this "transfer of learning" is a recurrent network that performs the task without requiring the output feedback present in the original network. We also discuss a hybrid version in which online learning is applied to both output and recurrent weights. Both approaches provide efficient ways of training recurrent networks to perform complex tasks. Through an analysis of the conditions required to make transfer of learning work, we define the concept of a "self-sensing" network state, and we compare and contrast this with compressed sensing.
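The transfer the abstract describes has a simple linear-algebra core: if the trained readout is z = w·r and the output feeds back into the network through a weight vector u, the feedback loop u·z = u(w·r) can be absorbed into the recurrent matrix as the rank-one update J + u wᵀ. The following NumPy sketch illustrates that core idea only; the network size, time constant, target signal, and ridge-regression readout are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200            # reservoir units (illustrative size)
dt, T, washout = 0.01, 2000, 200

# Sparse random recurrent weights J and feedback weights u (output -> network)
mask = rng.random((N, N)) < 0.1
J = rng.standard_normal((N, N)) * mask * (0.9 / np.sqrt(0.1 * N))
u = rng.uniform(-1, 1, N)

# Target output the readout should learn to produce
f = np.sin(2 * np.pi * np.arange(T) * dt)

# Teacher forcing: feed the target back in place of the network's own output
X = np.zeros((T, N))
x = rng.standard_normal(N) * 0.1
for t in range(T):
    r = np.tanh(x)
    X[t] = r
    x = x + dt * (-x + J @ r + u * f[t])

# Ridge regression for readout weights w, discarding the initial transient
Xw, fw = X[washout:], f[washout:]
lam = 1e-4
w = np.linalg.solve(Xw.T @ Xw + lam * np.eye(N), Xw.T @ fw)

# "Transfer of learning": absorb the feedback loop u * (w @ r) into the
# recurrent weights, so no explicit output feedback is needed
J_int = J + np.outer(u, w)
```

Because u(wᵀr) = (u wᵀ)r at every state r, the network with recurrent matrix J_int and no output feedback has the same dynamics as the feedback network whose output tracks the target. Note that the rank-one term u wᵀ is generally dense; the title's emphasis on sparse connectivity suggests the paper's methods concern making an equivalent transfer while respecting a sparse architecture, which this naive sketch does not do.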


Figures 1-4:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3ba/3360031/d3511b79596d/pone.0037372.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3ba/3360031/8b3ec08b7ca3/pone.0037372.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3ba/3360031/b146ae179c2d/pone.0037372.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3ba/3360031/96709077bf81/pone.0037372.g004.jpg

Similar Articles

1
Transferring learning from external to internal weights in echo-state networks with sparse connectivity.
PLoS One. 2012;7(5):e37372. doi: 10.1371/journal.pone.0037372. Epub 2012 May 24.
2
A Geometrical Analysis of Global Stability in Trained Feedback Networks.
Neural Comput. 2019 Jun;31(6):1139-1182. doi: 10.1162/neco_a_01187. Epub 2019 Apr 12.
3
Multi-source sequential knowledge regression by using transfer RNN units.
Neural Netw. 2019 Nov;119:151-161. doi: 10.1016/j.neunet.2019.08.004. Epub 2019 Aug 17.
4
A novel time series analysis approach for prediction of dialysis in critically ill patients using echo-state networks.
BMC Med Inform Decis Mak. 2010 Jan 21;10:4. doi: 10.1186/1472-6947-10-4.
5
full-FORCE: A target-based method for training recurrent networks.
PLoS One. 2018 Feb 7;13(2):e0191527. doi: 10.1371/journal.pone.0191527. eCollection 2018.
6
Online sequential echo state network with sparse RLS algorithm for time series prediction.
Neural Netw. 2019 Oct;118:32-42. doi: 10.1016/j.neunet.2019.05.006. Epub 2019 May 29.
7
Local online learning in recurrent networks with random feedback.
Elife. 2019 May 24;8:e43299. doi: 10.7554/eLife.43299.
8
Three learning phases for radial-basis-function networks.
Neural Netw. 2001 May;14(4-5):439-58. doi: 10.1016/s0893-6080(01)00027-2.
9
A generalized LSTM-like training algorithm for second-order recurrent neural networks.
Neural Netw. 2012 Jan;25(1):70-83. doi: 10.1016/j.neunet.2011.07.003. Epub 2011 Jul 18.
10
On the Post Hoc Explainability of Optimized Self-Organizing Reservoir Network for Action Recognition.
Sensors (Basel). 2022 Mar 1;22(5):1905. doi: 10.3390/s22051905.

Cited By

1
Taming the chaos gently: a predictive alignment learning rule in recurrent neural networks.
Nat Commun. 2025 Jul 23;16(1):6784. doi: 10.1038/s41467-025-61309-9.
2
Neural kernels for recursive support vector regression as a model for episodic memory.
Biol Cybern. 2022 Jun;116(3):377-386. doi: 10.1007/s00422-022-00926-9. Epub 2022 Mar 29.
3
Sample-level sound synthesis with recurrent neural networks and conceptors.
PeerJ Comput Sci. 2019 Jul 8;5:e205. doi: 10.7717/peerj-cs.205. eCollection 2019.
4
Constraints of Metabolic Energy on the Number of Synaptic Connections of Neurons and the Density of Neuronal Networks.
Front Comput Neurosci. 2018 Nov 20;12:91. doi: 10.3389/fncom.2018.00091. eCollection 2018.
5
From statistical inference to a differential learning rule for stochastic neural networks.
Interface Focus. 2018 Dec 6;8(6):20180033. doi: 10.1098/rsfs.2018.0033. Epub 2018 Oct 19.
6
Flexibility in motor timing constrains the topology and dynamics of pattern generator circuits.
Nat Commun. 2018 Mar 6;9(1):977. doi: 10.1038/s41467-018-03261-5.
7
full-FORCE: A target-based method for training recurrent networks.
PLoS One. 2018 Feb 7;13(2):e0191527. doi: 10.1371/journal.pone.0191527. eCollection 2018.
8
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.
Elife. 2017 Nov 27;6:e28295. doi: 10.7554/eLife.28295.
9
Working Memory Requires a Combination of Transient and Attractor-Dominated Dynamics to Process Unreliably Timed Inputs.
Sci Rep. 2017 May 30;7(1):2473. doi: 10.1038/s41598-017-02471-z.
10
Persistent Memory in Single Node Delay-Coupled Reservoir Computing.
PLoS One. 2016 Oct 26;11(10):e0165170. doi: 10.1371/journal.pone.0165170. eCollection 2016.

References

1
Stimulus-dependent suppression of chaos in recurrent neural networks.
Phys Rev E Stat Nonlin Soft Matter Phys. 2010 Jul;82(1 Pt 1):011903. doi: 10.1103/PhysRevE.82.011903. Epub 2010 Jul 7.
2
Generating coherent patterns of activity from chaotic neural networks.
Neuron. 2009 Aug 27;63(4):544-57. doi: 10.1016/j.neuron.2009.07.018.
3
Learning long-term dependencies with gradient descent is difficult.
IEEE Trans Neural Netw. 1994;5(2):157-66. doi: 10.1109/72.279181.
4
Computational aspects of feedback in neural circuits.
PLoS Comput Biol. 2007 Jan 19;3(1):e165. doi: 10.1371/journal.pcbi.0020165. Epub 2006 Oct 24.
5
Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication.
Science. 2004 Apr 2;304(5667):78-80. doi: 10.1126/science.1091277.
6
Real-time computing without stable states: a new framework for neural computation based on perturbations.
Neural Comput. 2002 Nov;14(11):2531-60. doi: 10.1162/089976602760407955.
7
Chaos in random neural networks.
Phys Rev Lett. 1988 Jul 18;61(3):259-262. doi: 10.1103/PhysRevLett.61.259.
8
Temporal information transformed into a spatial code by a neural network with realistic properties.
Science. 1995 Feb 17;267(5200):1028-30. doi: 10.1126/science.7863330.