Sandro Romani, Daniel J. Amit, Yali Amit
Human Physiology, Università di Roma La Sapienza, Rome 00185, Italy.
Neural Comput. 2008 Aug;20(8):1928-50. doi: 10.1162/neco.2008.10-07-618.
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing thousands of once-seen stimuli as familiar, distinguishing them from stimuli never seen before. Such networks were initially proposed as models of memory retrieval (selective delay activity). We show that the same framework accommodates both familiarity recognition and memory retrieval, and we estimate the network's capacity for each. For binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits from the signal-to-noise ratio of the difference in afferent field between neurons selective and non-selective for a learned stimulus. With fast learning (potentiation probability near 1), the most recently learned patterns can be retrieved in working memory (selective delay activity), while a much larger number of once-seen patterns elicits a realistic familiarity signal in the presence of an external field. With potentiation probability much smaller than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity remains comparably high. The analysis is corroborated by simulations. For analog neurons, where such analysis is harder, we simplify the capacity estimate by studying the excess of potentiated synapses above the steady-state distribution. In this framework we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
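The signal-to-noise argument the abstract refers to can be given schematically in the Amit–Fusi style. The symbols and the threshold condition below are an illustrative reconstruction, not the paper's exact notation or derivation. With coding level $f$ (fraction of active neurons per pattern), potentiation probability $q_+$, and depression probability $q_-$, each newly stored pattern partially overwrites older traces, so the excess potentiation left behind by a pattern of age $t$ decays exponentially:

```latex
S(t) \;\approx\; q_+\,(1 - p_0)\, e^{-\lambda t},
\qquad
\lambda \;\approx\; f^2 q_+ \;+\; f(1-f)\, q_-,
```

where $p_0$ is the steady-state fraction of potentiated synapses. The fluctuation of the mean field over the $\sim (fN)^2$ synapses connecting a pattern's active neurons has standard deviation $\sigma \approx \sqrt{p_0(1-p_0)}/(fN)$, so requiring $S(t)/\sigma \ge \theta$ for a detection threshold $\theta$ gives a familiarity capacity of order

```latex
P \;\approx\; \frac{1}{\lambda}\,
\ln\!\left( \frac{q_+ (1-p_0)\, f N}{\theta \sqrt{p_0 (1-p_0)}} \right).
```

Since $\lambda \propto q_+$ (for $q_-$ scaled with $q_+$), lowering the potentiation probability lengthens the memory lifetime $1/\lambda$ at the cost of a weaker per-pattern trace, which is the fast-versus-slow learning trade-off the abstract describes.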
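The once-seen learning protocol can be illustrated with a small simulation. This is a minimal sketch, assuming the standard stochastic binary-synapse rule (potentiate pre-active/post-active pairs with probability `q_pot`, depress pre-active/post-inactive pairs with probability `q_dep`); all parameter values are illustrative, and the balanced choice of `q_dep`, which keeps about half the synapses potentiated at steady state, is an assumption for the sketch, not the optimal constraint derived in the paper. The familiarity readout used here is the fraction of potentiated synapses among a pattern's active neurons, i.e. the excess above the steady-state level, the quantity the abstract invokes for the analog-neuron analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's values)
N = 500        # neurons
f = 0.1        # coding level: fraction of neurons active per pattern
q_pot = 0.1    # potentiation probability (q_pot << 1 is the slow-learning regime)
q_dep = q_pot * f / (1 - f)  # assumed balance: steady-state potentiated fraction ~ 1/2
P = 500        # number of once-seen stimuli

# Binary excitatory synapses, initialised at the steady-state distribution
J = rng.random((N, N)) < 0.5

# Each sparse random pattern is presented exactly once
patterns = rng.random((P, N)) < f

for xi in patterns:
    act = np.flatnonzero(xi)     # active neurons in this pattern
    inact = np.flatnonzero(~xi)  # inactive neurons
    # potentiate pre-active / post-active synapses with probability q_pot
    J[np.ix_(act, act)] |= rng.random((act.size, act.size)) < q_pot
    # depress pre-active / post-inactive synapses with probability q_dep
    J[np.ix_(inact, act)] &= ~(rng.random((inact.size, act.size)) < q_dep)

def familiarity(J, xi):
    """Fraction of potentiated synapses among the pattern's active neurons."""
    act = np.flatnonzero(xi)
    return J[np.ix_(act, act)].mean()

novel = rng.random(N) < f              # a never-seen stimulus
h_seen = familiarity(J, patterns[-1])  # most recently learned pattern
h_novel = familiarity(J, novel)        # novel pattern sits at the steady-state level
print(h_seen, h_novel)                 # the familiarity signal is the gap h_seen - h_novel
```

The freshly learned pattern carries an excess of roughly `q_pot * (1 - p0)` potentiated synapses above the steady-state fraction `p0`, and this excess decays exponentially as further patterns are stored; a threshold on the gap between the two readouts implements the familiarity decision.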