Kong Ling-Wei, Brewer Gene A, Lai Ying-Cheng
Department of Computational Biology, Cornell University, Ithaca, New York, USA.
School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona, USA.
Nat Commun. 2024 Jun 6;15(1):4840. doi: 10.1038/s41467-024-49190-4.
Traditional neural network models of associative memory have been used to store and retrieve static patterns. We develop reservoir-computing based memories for complex dynamical attractors, under two common recall scenarios in neuropsychology: location-addressable retrieval with an index channel and content-addressable retrieval without such a channel. We demonstrate that, for location-addressable retrieval, a single reservoir computing machine can memorize a large number of periodic and chaotic attractors, each retrievable with a specific index value. We articulate control strategies to achieve successful switching among the attractors, unveil the mechanism behind failed switching, and uncover various scaling behaviors between the number of stored attractors and the reservoir network size. For content-addressable retrieval, we exploit multistability with cue signals, where the stored attractors coexist in the high-dimensional phase space of the reservoir network. As the length of the cue signal increases through a critical value, a high success rate can be achieved. The work provides foundational insights into developing long-term memories and itinerancy for complex dynamical patterns.
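The location-addressable scheme described in the abstract (a single reservoir that stores several dynamical patterns, each recalled by clamping an auxiliary index channel and closing the output-feedback loop) can be illustrated with a minimal echo state network sketch. Everything below is an illustrative assumption rather than the paper's actual architecture or code: the NumPy implementation, the two sine-wave "attractors" standing in for periodic or chaotic attractors, the index values ±0.5, and all hyperparameters.

```python
# Minimal sketch: location-addressable memory of two oscillatory patterns in a
# single echo state network (reservoir computer) with an extra index channel.
# All settings here are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

N = 400                 # reservoir size (assumed)
spectral_radius = 0.9   # assumed
ridge = 1e-6            # ridge-regression regularization (assumed)

# Fixed random reservoir, signal-input, and index-input weights.
W = rng.normal(size=(N, N))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, size=N)    # channel for the dynamical signal
W_idx = rng.uniform(-1, 1, size=N)   # channel for the index value

def run_open_loop(u, idx, x=None):
    """Drive the reservoir with a teacher signal u and a constant index idx."""
    x = np.zeros(N) if x is None else x
    states = np.empty((len(u), N))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in * ut + W_idx * idx)
        states[t] = x
    return states, x

# Two stored "attractors": sine waves of different periods, each tagged with a
# distinct constant value on the index channel (hypothetical choices).
T = 2000
t = np.arange(T)
patterns = [np.sin(2 * np.pi * t / 40), np.sin(2 * np.pi * t / 97)]
indices = [-0.5, 0.5]

# Train one linear readout (ridge regression) on states from both patterns to
# predict the next signal value, discarding an initial transient.
X, Y = [], []
for u, idx in zip(patterns, indices):
    states, _ = run_open_loop(u[:-1], idx)
    X.append(states[200:])
    Y.append(u[1:][200:])
X, Y = np.vstack(X), np.concatenate(Y)
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)

def recall(idx, steps=500):
    """Location-addressable retrieval: close the feedback loop, clamp idx."""
    x, u, out = np.zeros(N), 0.0, []
    for _ in range(steps):
        x = np.tanh(W @ x + W_in * u + W_idx * idx)
        u = x @ W_out            # feed the readout back as the next input
        out.append(u)
    return np.array(out)

# Clamping the index channel to -0.5 or +0.5 should, after a transient, make
# the closed-loop reservoir settle onto the corresponding stored oscillation.
for idx in indices:
    y = recall(idx)[200:]
    k = np.argmax(np.abs(np.fft.rfft(y))[1:]) + 1
    print(f"index {idx:+.1f}: dominant period ~ {len(y) / k:.1f} steps")
```

Content-addressable retrieval, in this picture, would drop the index channel entirely: the reservoir is seeded open-loop with a short cue segment of one stored pattern and then allowed to run closed-loop, so that the multistable reservoir dynamics fall into the basin of the cued attractor once the cue is long enough, consistent with the critical cue length reported in the abstract.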