Learning continuous chaotic attractors with a reservoir computer.

Affiliations

Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA.

Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA.

Publication Information

Chaos. 2022 Jan;32(1):011101. doi: 10.1063/5.0075572.

DOI: 10.1063/5.0075572
PMID: 35105129
Abstract

Neural systems are well known for their ability to learn and store information as memories. Even more impressive is their ability to abstract these memories to create complex internal representations, enabling advanced functions such as the spatial manipulation of mental representations. While recurrent neural networks (RNNs) are capable of representing complex information, the exact mechanisms of how dynamical neural systems perform abstraction are still not well understood, thereby hindering the development of more advanced functions. Here, we train a 1000-neuron RNN, a reservoir computer (RC), to abstract a continuous dynamical attractor memory from isolated examples of dynamical attractor memories. Furthermore, we explain the abstraction mechanism with a new theory. By training the RC on isolated and shifted examples of either stable limit cycles or chaotic Lorenz attractors, the RC learns a continuum of attractors as quantified by an extra Lyapunov exponent equal to zero. We propose a theoretical mechanism of this abstraction by combining ideas from differentiable generalized synchronization and feedback dynamics. Our results quantify abstraction in simple neural systems, enabling us to design artificial RNNs for abstraction and leading us toward a neural basis of abstraction.
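To make the training pipeline concrete, below is a minimal sketch in Python/NumPy of the standard reservoir-computing setup the abstract describes: a fixed random 1000-neuron recurrent network driven by a Lorenz trajectory, a ridge-regression readout, and a closed-loop phase in which the readout is fed back as input. This is an illustration of the general technique under assumed hyperparameters (spectral radius 0.9, input scaling 0.1, ridge penalty 1e-6), not the authors' code; in particular, the paper's key step of training on isolated, shifted copies of the attractor to induce abstraction is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_traj(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps (sketch-grade accuracy)."""
    x = np.array([1.0, 1.0, 1.05])
    traj = np.empty((n_steps, 3))
    for t in range(n_steps):
        dx = np.array([
            sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2],
        ])
        x = x + dt * dx
        traj[t] = x
    return traj

train = lorenz_traj(5000)
washout = 200  # discard initial transient reservoir states

# Fixed random reservoir (1000 neurons, as in the paper); only the linear
# readout is trained. Spectral radius and input scaling are assumptions.
N = 1000
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to spectral radius 0.9
W_in = rng.uniform(-0.1, 0.1, (N, 3))

# Open-loop (driven) phase: record reservoir states along the trajectory.
states = np.zeros((len(train), N))
r = np.zeros(N)
for t in range(1, len(train)):
    r = np.tanh(W @ r + W_in @ train[t - 1])
    states[t] = r

# Ridge-regression readout mapping state at time t to input at time t,
# so the loop closes consistently when predictions are fed back.
X, Y = states[washout:], train[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

# Closed-loop phase: feed the readout back as input. If training succeeded,
# the autonomous RC remains on a Lorenz-like attractor.
r, y = states[-1], train[-1]
preds = np.empty((1000, 3))
for t in range(1000):
    r = np.tanh(W @ r + W_in @ y)
    y = r @ W_out
    preds[t] = y
print("closed-loop sample:", preds[-1])
```

Per the abstract, a learned continuum of attractors would show up as an extra Lyapunov exponent of the closed-loop system equal to zero; the sketch stops at closed-loop prediction and does not compute the Lyapunov spectrum.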


Similar Articles

1. Learning continuous chaotic attractors with a reservoir computer. Chaos. 2022 Jan;32(1):011101. doi: 10.1063/5.0075572.
2. [Dynamic paradigm in psychopathology: "chaos theory", from physics to psychiatry]. Encephale. 2001 May-Jun;27(3):260-8.
3. Using a reservoir computer to learn chaotic attractors, with applications to chaos synchronization and cryptography. Phys Rev E. 2018 Jul;98(1-1):012215. doi: 10.1103/PhysRevE.98.012215.
4. Attractor reconstruction with reservoir computers: The effect of the reservoir's conditional Lyapunov exponents on faithful attractor reconstruction. Chaos. 2024 Apr 1;34(4). doi: 10.1063/5.0196257.
5. Representations of continuous attractors of recurrent neural networks. IEEE Trans Neural Netw. 2009 Feb;20(2):368-72. doi: 10.1109/TNN.2008.2010771. Epub 2009 Jan 13.
6. Continuous attractors of Lotka-Volterra recurrent neural networks with infinite neurons. IEEE Trans Neural Netw. 2010 Oct;21(10):1690-5. doi: 10.1109/TNN.2010.2067224. Epub 2010 Sep 2.
7. Backpropagation algorithms and Reservoir Computing in Recurrent Neural Networks for the forecasting of complex spatiotemporal dynamics. Neural Netw. 2020 Jun;126:191-217. doi: 10.1016/j.neunet.2020.02.016. Epub 2020 Mar 21.
8. Learning long-term motor timing/patterns on an orthogonal basis in random neural networks. Neural Netw. 2023 Jun;163:298-311. doi: 10.1016/j.neunet.2023.04.006. Epub 2023 Apr 12.
9. Learning sequence attractors in recurrent networks with hidden neurons. Neural Netw. 2024 Oct;178:106466. doi: 10.1016/j.neunet.2024.106466. Epub 2024 Jun 22.
10. Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation. Neural Comput. 2021 Sep 16;33(10):2603-2645. doi: 10.1162/neco_a_01418.

Cited By

1. Shaping dynamical neural computations using spatiotemporal constraints. Biochem Biophys Res Commun. 2024 Oct 8;728:150302. doi: 10.1016/j.bbrc.2024.150302. Epub 2024 Jun 25.
2. Shaping dynamical neural computations using spatiotemporal constraints. ArXiv. 2023 Nov 27:arXiv:2311.15572v1.