Network capacity analysis for latent attractor computation.

Author Information

Doboli Simona, Minai Ali A

Affiliation

Computer Science Department, Hofstra University, Hempstead, NY 11549, USA.

Publication Information

Network. 2003 May;14(2):273-302.

Abstract

Attractor networks have been one of the most successful paradigms in neural computation, and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called 'latent attractors', where attractors embedded in a recurrent network via Hebbian learning are used to channel network response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus, a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. Firing in both layers is competitive (K-winners-take-all), and the number of neurons allowed to fire, K, is smaller than the size of the active set of the stored attractors. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. In this paper, we use signal-to-noise methods developed for standard associative memory networks to carry out a theoretical and computational analysis of the capacity and dynamics of latent attractor networks. This is an important first step in making latent attractors a viable tool in the repertoire of neural computation. The method developed here yields numerical estimates of the capacity limits and dynamics of latent attractor networks, and represents a general approach to analysing standard associative memory networks with competitive firing. The theoretical analysis is based on Gaussian approximations of the dendritic sum distributions. Because of the competitive firing property, the capacity results can be obtained only numerically, by iteratively computing the probability of erroneous firings. The analysis covers two cases: a simple case, which accounts for the correlations between weights due to shared patterns, and a detailed case, which also includes the temporal correlations between the network's present and previous states. The latter better predicts the dynamics of the network state when initial spurious firing is non-zero. The theoretical analysis also shows the influence of the model's main parameters on the storage capacity.
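
To make the stored-pattern scheme concrete, the following is a minimal Python sketch, assuming a single recurrent layer of binary units: sparse patterns are stored with a clipped (0/1) Hebbian rule, and firing is K-winners-take-all with K smaller than the patterns' active-set size. The parameter values (N, M, a, K) and the helper name k_winners_take_all are illustrative assumptions, not figures from the paper, and the single-layer recurrence simplifies the paper's two-layer architecture.

# A minimal simulation sketch, assuming a single recurrent layer of
# binary {0, 1} units; parameter values below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N = 1000                 # number of neurons (assumed)
M = 50                   # number of stored attractor patterns (assumed)
a = 0.10                 # fraction of active units per pattern (assumed)
K = 60                   # winners per step; note K < a*N, the active-set size

# Sparse binary patterns: each has a*N randomly chosen active units.
patterns = np.zeros((M, N))
for p in patterns:
    p[rng.choice(N, size=int(a * N), replace=False)] = 1.0

# Clipped Hebbian learning: a weight is 1 if any stored pattern
# co-activates the pair of neurons, and 0 otherwise.
W = np.clip(patterns.T @ patterns, 0.0, 1.0)
np.fill_diagonal(W, 0.0)             # no self-connections

def k_winners_take_all(h, k):
    """Competitive firing: only the k units with the largest
    dendritic sums h fire on this step."""
    state = np.zeros_like(h)
    state[np.argsort(h)[-k:]] = 1.0
    return state

# Run the recurrent dynamics from an arbitrary initial firing pattern.
state = k_winners_take_all(rng.random(N), K)
for _ in range(20):
    h = W @ state                    # dendritic sums
    state = k_winners_take_all(h, K)

# Fraction of the K firing units that fall inside each stored pattern's
# active set; a high overlap with one pattern means the state has
# settled inside a K-sized subset of that attractor.
overlaps = patterns @ state / K
print(f"best overlap with a stored pattern: {overlaps.max():.2f}")

A second sketch illustrates the flavour of the signal-to-noise analysis under a Gaussian approximation of the dendritic sum distributions. It assumes independent weights, so it omits the weight correlations from shared patterns handled by the paper's simple case, and the temporal correlations between successive states handled by the detailed case.

# Back-of-the-envelope signal-to-noise estimate, assuming independent
# clipped Hebbian weights (a simplification of the paper's analysis).
from math import sqrt
from statistics import NormalDist

N, M, a = 1000, 50, 0.10             # same illustrative values as above
n_active = int(a * N)

p1 = 1.0 - (1.0 - a * a) ** M        # P(a weight was clipped to 1)

# If the full active set of one stored pattern fires, a member unit
# receives n_active - 1 potentiated inputs, while a non-member unit
# receives each input potentiated only with probability p1.
mu_signal = n_active - 1
mu_noise = n_active * p1
sigma_noise = sqrt(n_active * p1 * (1.0 - p1))

# Gaussian approximation of the dendritic sum distribution: the
# probability that a non-member's sum reaches the signal level,
# i.e. a candidate erroneous firing.
p_err = 1.0 - NormalDist(mu_noise, sigma_noise).cdf(mu_signal)
print(f"p1 = {p1:.3f}, P(erroneous firing) ~ {p_err:.1e}")

Since K-winners-take-all firing imposes no fixed threshold, no closed-form capacity expression follows from such estimates; as the abstract notes, the paper instead iterates the erroneous-firing probability over update steps to obtain its numerical capacity results.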
