Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA.
Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA.
Chaos. 2022 Jan;32(1):011101. doi: 10.1063/5.0075572.
Neural systems are well known for their ability to learn and store information as memories. Even more impressive is their ability to abstract these memories to create complex internal representations, enabling advanced functions such as the spatial manipulation of mental representations. While recurrent neural networks (RNNs) are capable of representing complex information, the exact mechanisms by which dynamical neural systems perform abstraction are still not well understood, thereby hindering the development of more advanced functions. Here, we train a 1000-neuron RNN, a reservoir computer (RC), to abstract a continuous dynamical attractor memory from isolated examples of dynamical attractor memories. Furthermore, we explain the abstraction mechanism with a new theory. By training the RC on isolated and shifted examples of either stable limit cycles or chaotic Lorenz attractors, the RC learns a continuum of attractors, as quantified by an extra Lyapunov exponent equal to zero. We propose a theoretical mechanism for this abstraction by combining ideas from differentiable generalized synchronization and feedback dynamics. Our results quantify abstraction in simple neural systems, enabling us to design artificial RNNs for abstraction and leading us toward a neural basis of abstraction.
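To make the training setup concrete, the following is a minimal sketch (in Python with NumPy) of an echo-state-style reservoir computer trained on spatially shifted Lorenz-attractor examples and then run in closed loop with output feedback. The reservoir size of 1000 neurons follows the abstract; the spectral radius, input scaling, ridge parameter, shift values, and Euler integration are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): an echo state network trained on
# shifted Lorenz-attractor examples, then run autonomously with output
# feedback. All hyperparameters below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def lorenz_trajectory(n_steps, dt=0.005, shift=np.zeros(3)):
    """Integrate the Lorenz system with Euler steps, then translate it."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n_steps, 3))
    for t in range(n_steps):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        traj[t] = x
    return traj + shift  # a shifted, isolated example of the attractor

# Reservoir: N = 1000 neurons as in the abstract; connectivity assumed.
N, dim = 1000, 3
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius ~0.9
W_in = rng.uniform(-0.1, 0.1, size=(N, dim))

def drive(inputs, r0=None):
    """Drive the reservoir with an input series; return the state history."""
    r = np.zeros(N) if r0 is None else r0
    states = np.empty((len(inputs), N))
    for t, u in enumerate(inputs):
        r = np.tanh(W @ r + W_in @ u)
        states[t] = r
    return states

# Train on isolated, shifted examples of the same attractor (shifts assumed).
shifts = [np.array([s, 0.0, 0.0]) for s in (-20.0, 0.0, 20.0)]
R, Y = [], []
for shift in shifts:
    traj = lorenz_trajectory(5000, shift=shift)
    states = drive(traj[:-1])
    R.append(states[500:])   # discard the initial transient
    Y.append(traj[501:])     # one-step-ahead prediction targets
R, Y = np.vstack(R), np.vstack(Y)

# Ridge-regression readout: W_out maps reservoir states to predictions.
ridge = 1e-6
W_out = np.linalg.solve(R.T @ R + ridge * np.eye(N), R.T @ Y).T

# Closed-loop (feedback) phase: the RC's own output becomes its next input,
# so the trained network runs as an autonomous dynamical system.
r = drive(lorenz_trajectory(1000, shift=np.array([10.0, 0.0, 0.0])))[-1]
u = W_out @ r
outputs = []
for _ in range(2000):
    r = np.tanh(W @ r + W_in @ u)
    u = W_out @ r
    outputs.append(u)
outputs = np.asarray(outputs)
# Abstraction would be probed by seeding the loop at shifts never seen in
# training and by estimating the closed-loop Lyapunov spectrum, looking for
# the extra zero exponent that signals a continuum of attractors.
```

In this sketch, each shifted example is presented in isolation (the reservoir state is reset between examples), mirroring the abstract's "isolated and shifted examples"; whether the trained feedback system interpolates between them is what the extra zero Lyapunov exponent would quantify.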