Medina Daniel, Leibold Christian
Department Biologie II, Ludwig-Maximilians-Universität München, Munich, Germany; Bernstein Center for Computational Neuroscience Munich, Munich, Germany.
Front Synaptic Neurosci. 2014 Jun 10;6:13. doi: 10.3389/fnsyn.2014.00013. eCollection 2014.
Recurrent networks have been proposed as a model of associative memory. In such models, memory items are stored in the strength of connections between neurons. These modifiable connections or synapses constitute a shared resource among all stored memories, limiting the capacity of the network. Synaptic plasticity at different time scales can play an important role in optimizing the representation of associative memories, by keeping them sparse, uncorrelated and non-redundant. Here, we use a model of sequence memory to illustrate how plasticity allows a recurrent network to self-optimize by gradually re-encoding the representation of its memory items. A learning rule is used to sparsify large patterns, i.e., patterns with many active units. As a result, pattern sizes become more homogeneous, which increases the network's dynamical stability during sequence recall and allows more patterns to be stored. Last, we show that the learning rule allows for online learning in that it keeps the network in a robust dynamical steady state while storing new memories and overwriting old ones.
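To make the abstract's setup concrete, below is a minimal sketch of sequence storage and recall in a binary recurrent network. It is not the authors' model: the heteroassociative Hebbian outer-product rule, the k-winners-take-all readout used to keep recalled pattern sizes homogeneous, and all parameter values are illustrative assumptions only.

```python
# Toy sequence memory: binary patterns stored in recurrent weights so that
# presenting pattern mu drives pattern mu+1. A k-winners-take-all step
# (an assumed stand-in for the paper's sparsification rule) forces recalled
# patterns toward a homogeneous, sparse size.
import numpy as np

rng = np.random.default_rng(0)
N = 200   # number of neurons (assumed)
P = 10    # number of patterns in the sequence (assumed)
K = 10    # target pattern size after sparsification (assumed)

# Store a sequence of binary patterns whose sizes vary widely ("large" patterns).
sizes = rng.integers(K, 4 * K, size=P)
patterns = np.zeros((P, N))
for mu, k in enumerate(sizes):
    patterns[mu, rng.choice(N, size=k, replace=False)] = 1.0

# Heteroassociative Hebbian weights: outer product of successive patterns.
W = np.zeros((N, N))
for mu in range(P - 1):
    W += np.outer(patterns[mu + 1], patterns[mu]) / N

def recall_step(x, k_winners=K):
    """Synchronous update; only the k most strongly driven units stay active."""
    h = W @ x
    winners = np.argsort(h)[-k_winners:]
    nxt = np.zeros_like(x)
    nxt[winners[h[winners] > 0]] = 1.0
    return nxt

# Recall the sequence from its first element and track the overlap with the
# originally stored (possibly larger) patterns.
x = patterns[0].copy()
for mu in range(1, P):
    x = recall_step(x)
    overlap = x @ patterns[mu] / patterns[mu].sum()
    print(f"step {mu}: size {int(x.sum())}, overlap with stored pattern {overlap:.2f}")
```

In this toy, recalled patterns all have roughly K active units regardless of how large the stored patterns were, which is the intuition behind the paper's point that homogenizing pattern sizes stabilizes sequence recall; the actual learning rule and its online re-encoding of stored memories are described in the article itself.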