Memory dynamics in attractor networks with saliency weights.

Affiliations

Institute for Infocomm Research, Agency for Science Technology and Research, Singapore 138632.

Publication

Neural Comput. 2010 Jul;22(7):1899-926. doi: 10.1162/neco.2010.07-09-1050.

Abstract

Memory is a fundamental capability of computational systems such as the human brain. Theoretical models identify memories with attractors of neural network activity, based on the observation that attractor (recurrent) neural networks capture several crucial characteristics of memory: encoding, storage, retrieval, and both long-term and working memory. In such networks, long-term storage of the memory patterns is enabled by synaptic strengths adjusted according to activity-dependent plasticity mechanisms (of which the most widely recognized is the Hebbian rule), so that the attractors of the network dynamics represent the stored memories. Most previous studies of associative memory focus on Hopfield-like binary networks and assume that the learned patterns are uncorrelated, so that interactions between memories are minimized. In this letter, we restrict our attention to a more biologically plausible attractor network model and study the neuronal representation of correlated patterns, examining the role of saliency weights in memory dynamics. Our results demonstrate that the retrieval of memorized patterns is characterized by the saliency distribution, which shapes the landscape of the attractors. We establish the conditions under which the network state converges to a unique memory or to multiple memories. The analytical results also hold for variable coding levels and nonbinary patterns, indicating a general property emerging from correlated memories. Our results confirm the advantage of computing with graded-response neurons over binary neurons (namely, a reduction of spurious states). We also find that a nonuniform saliency distribution can contribute to the disappearance of spurious states when they exist.

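The paper's exact network model is not reproduced here, but the core idea the abstract describes (Hebbian learning with per-pattern saliency weights shaping the attractor landscape) can be illustrated with a minimal sketch. The code below is an assumption-laden toy: a standard binary Hopfield network in which each memory's outer-product contribution is scaled by a hypothetical saliency value, so that more salient patterns get deeper basins of attraction. The pattern count, network size, and saliency values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                        # number of binary (+/-1) neurons
patterns = rng.choice([-1, 1], size=(3, N))   # three random memory patterns
saliency = np.array([2.0, 1.0, 1.0])          # hypothetical per-pattern saliency weights

# Hebbian outer-product learning, with each pattern's contribution
# scaled by its saliency weight
W = np.zeros((N, N))
for s, p in zip(saliency, patterns):
    W += s * np.outer(p, p) / N
np.fill_diagonal(W, 0.0)                      # no self-connections

def retrieve(state, steps=20):
    """Synchronous recall dynamics: iterate sign(W @ state) to a fixed point."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1                     # break ties toward +1
        if np.array_equal(new, state):
            break                             # reached an attractor
        state = new
    return state

# Cue with a corrupted copy of pattern 0 (10 bits flipped) and recall it
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1
recalled = retrieve(cue)
overlap = recalled @ patterns[0] / N          # overlap 1.0 means perfect retrieval
print(overlap)
```

With only three patterns in a 64-neuron network the load is far below capacity, so the noisy cue typically converges back to the stored pattern; raising a pattern's saliency enlarges its basin relative to the others, which is the qualitative effect the letter analyzes for correlated patterns.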
