

A Geometrical Analysis of Global Stability in Trained Feedback Networks.

Affiliations

Laboratoire de Neurosciences Cognitives et Computationelles, INSERM U960, and Laboratoire de Physique Statistique, CNRS UMR 8550, Ecole Normale Supérieure-PSL Research University, Paris 75005, France

Laboratoire de Neurosciences Cognitives et Computationelles, INSERM U960, Ecole Normale Supérieure-PSL Research University, Paris 75005, France

Publication Information

Neural Comput. 2019 Jun;31(6):1139-1182. doi: 10.1162/neco_a_01187. Epub 2019 Apr 12.

DOI: 10.1162/neco_a_01187
PMID: 30979353
Abstract

Recurrent neural networks have been extensively studied in the context of neuroscience and machine learning due to their ability to implement complex computations. While substantial progress in designing effective learning algorithms has been achieved, a full understanding of trained recurrent networks is still lacking. Specifically, the mechanisms that allow computations to emerge from the underlying recurrent dynamics are largely unknown. Here we focus on a simple yet underexplored computational setup: a feedback architecture trained to associate a stationary output to a stationary input. As a starting point, we derive an approximate analytical description of global dynamics in trained networks, which assumes uncorrelated connectivity weights in the feedback and in the random bulk. The resulting mean-field theory suggests that the task admits several classes of solutions, which imply different stability properties. Different classes are characterized in terms of the geometrical arrangement of the readout with respect to the input vectors, defined in the high-dimensional space spanned by the network population. We find that such an approximate theoretical approach can be used to understand how standard training techniques implement the input-output task in finite-size feedback networks. In particular, our simplified description captures the local and the global stability properties of the target solution, and thus predicts training performance.
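The setup the abstract describes, a random recurrent bulk with a rank-one feedback loop, trained so that a stationary input produces a stationary readout, can be sketched numerically. The following is an illustrative toy, not the paper's mean-field analysis or training procedure: the network size, bulk gain, teacher-forcing step, and minimum-norm readout solution are all choices made here for the demo.

```python
import numpy as np

# Sketch of the abstract's setup: a rate network with random bulk J, input
# vector I_vec, and feedback m * z, where the scalar readout z = w . phi(x)
# is trained so that a stationary input u yields a stationary output
# z_target. All parameter choices are assumptions for this demo.

rng = np.random.default_rng(0)
N, g = 200, 0.8                                   # size; subcritical bulk gain
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random bulk connectivity
I_vec = rng.standard_normal(N)                    # input vector
m = rng.standard_normal(N)                        # feedback vector
u, z_target = 0.5, 0.5                            # stationary input / target output
phi = np.tanh

def relax(drive, T=4000, dt=0.05):
    """Euler-integrate x' = -x + J phi(x) + drive(x) toward a fixed point."""
    x = np.zeros(N)
    for _ in range(T):
        x += dt * (-x + J @ phi(x) + drive(x))
    return x

# Teacher forcing: clamp the feedback at the target output and let the
# network settle to the driven fixed point x*.
x_star = relax(lambda x: m * z_target + I_vec * u)

# Minimum-norm readout satisfying w . phi(x*) = z_target exactly.
r = phi(x_star)
w = z_target * r / (r @ r)

# Close the loop: feed the readout back and check that the output settles
# near the target, i.e. the trained fixed point is reached dynamically.
x_cl = relax(lambda x: m * (w @ phi(x)) + I_vec * u)
z_out = w @ phi(x_cl)
print(f"closed-loop output: {z_out:.3f} (target {z_target})")
```

With the subcritical gain chosen here the clamped fixed point remains a stable fixed point of the closed loop, which is the kind of local/global stability question the paper's geometrical analysis addresses; whether training succeeds in general depends on the arrangement of the readout and input vectors, as the abstract states.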


Similar Articles

1. A Geometrical Analysis of Global Stability in Trained Feedback Networks.
   Neural Comput. 2019 Jun;31(6):1139-1182. doi: 10.1162/neco_a_01187. Epub 2019 Apr 12.
2. Transferring learning from external to internal weights in echo-state networks with sparse connectivity.
   PLoS One. 2012;7(5):e37372. doi: 10.1371/journal.pone.0037372. Epub 2012 May 24.
3. Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks.
   Neuron. 2018 Aug 8;99(3):609-623.e29. doi: 10.1016/j.neuron.2018.07.003. Epub 2018 Jul 26.
4. Differential Geometry Methods for Constructing Manifold-Targeted Recurrent Neural Networks.
   Neural Comput. 2022 Jul 14;34(8):1790-1811. doi: 10.1162/neco_a_01511.
5. Local online learning in recurrent networks with random feedback.
   Elife. 2019 May 24;8:e43299. doi: 10.7554/eLife.43299.
6. Local Dynamics in Trained Recurrent Neural Networks.
   Phys Rev Lett. 2017 Jun 23;118(25):258101. doi: 10.1103/PhysRevLett.118.258101.
7. Biologically plausible deep learning - But how far can we go with shallow networks?
   Neural Netw. 2019 Oct;118:90-101. doi: 10.1016/j.neunet.2019.06.001. Epub 2019 Jun 20.
8. Design of double fuzzy clustering-driven context neural networks.
   Neural Netw. 2018 Aug;104:1-14. doi: 10.1016/j.neunet.2018.03.018. Epub 2018 Apr 9.
9. full-FORCE: A target-based method for training recurrent networks.
   PLoS One. 2018 Feb 7;13(2):e0191527. doi: 10.1371/journal.pone.0191527. eCollection 2018.
10. Contrastive Hebbian learning with random feedback weights.
   Neural Netw. 2019 Jun;114:1-14. doi: 10.1016/j.neunet.2019.01.008. Epub 2019 Feb 21.

Cited By

1. Evolution of neural activity in circuits bridging sensory and abstract knowledge.
   Elife. 2023 Mar 7;12:e79908. doi: 10.7554/eLife.79908.
2. Thalamic control of cortical dynamics in a model of flexible motor sequencing.
   Cell Rep. 2021 Jun 1;35(9):109090. doi: 10.1016/j.celrep.2021.109090.
3. Universality and individuality in neural dynamics across large populations of recurrent networks.
   Adv Neural Inf Process Syst. 2019 Dec;2019:15629-15641.