
Transferring learning from external to internal weights in echo-state networks with sparse connectivity.

Affiliations

Department of Electrical Engineering, Stanford University, Stanford, California, United States of America.

Publication information

PLoS One. 2012;7(5):e37372. doi: 10.1371/journal.pone.0037372. Epub 2012 May 24.

Abstract

Modifying weights within a recurrent network to improve performance on a task has proven to be difficult. Echo-state networks in which modification is restricted to the weights of connections onto network outputs provide an easier alternative, but at the expense of modifying the typically sparse architecture of the network by including feedback from the output back into the network. We derive methods for using the values of the output weights from a trained echo-state network to set recurrent weights within the network. The result of this "transfer of learning" is a recurrent network that performs the task without requiring the output feedback present in the original network. We also discuss a hybrid version in which online learning is applied to both output and recurrent weights. Both approaches provide efficient ways of training recurrent networks to perform complex tasks. Through an analysis of the conditions required to make transfer of learning work, we define the concept of a "self-sensing" network state, and we compare and contrast this with compressed sensing.
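The core substitution behind this transfer is easiest to see in the dense, scalar-output case: with feedback, the dynamics are x(t+1) = tanh(W x(t) + w_fb z(t)) with z(t) = w_out · x(t), so folding the readout into the recurrence via W' = W + w_fb w_outᵀ reproduces the same trajectory without an explicit feedback loop. Below is a minimal NumPy sketch of that idea, assuming a standard leakless echo-state network trained by teacher forcing and ridge regression; the variable names (`w_fb`, `w_out`, `W_transfer`) are illustrative, and the dense rank-1 update deliberately ignores the sparse-connectivity constraint that the paper's "self-sensing" analysis addresses.

```python
# Minimal sketch (not the paper's exact method): train an echo-state
# network with output feedback, then absorb the learned output weights
# into the recurrent matrix so the feedback loop can be removed.
import numpy as np

rng = np.random.default_rng(0)
N = 200                                            # reservoir size
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))      # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1
w_fb = rng.normal(0.0, 1.0, N)                     # output-feedback weights

# Target output: a slow sinusoid the network should learn to generate.
T = 1000
target = np.sin(2 * np.pi * np.arange(T) / 100.0)

# Teacher forcing: drive the reservoir with the target as the feedback signal.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    X[t] = x
    x = np.tanh(W @ x + w_fb * target[t])

# Ridge regression for the output weights (standard ESN training).
lam = 1e-6
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)

# "Transfer of learning": fold the rank-1 feedback loop into the recurrence,
#   x(t+1) = tanh(W x + w_fb (w_out . x)) = tanh((W + w_fb w_out^T) x).
W_transfer = W + np.outer(w_fb, w_out)

# The feedback-free network now continues the oscillation autonomously;
# the readout is only observed, never fed back.
x = X[-1].copy()
generated = []
for t in range(300):
    generated.append(w_out @ x)
    x = np.tanh(W_transfer @ x)
```

Note the design point the paper turns on: `W_transfer` here is dense because `np.outer` touches every pair of units, whereas the paper's methods set recurrent weights while respecting the network's typically sparse architecture, which is where the "self-sensing" condition and the comparison to compressed sensing come in.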


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3ba/3360031/d3511b79596d/pone.0037372.g001.jpg
