Honda Research Institute Europe GmbH, D-63073 Offenbach, Germany.
Neural Comput. 2010 Feb;22(2):289-341. doi: 10.1162/neco.2009.08-07-588.
Neural associative networks with plastic synapses have been proposed as computational models of brain function and also for applications such as pattern recognition and information retrieval. To guide biological models and optimize technical applications, several definitions of memory capacity have been used to measure the efficiency of associative memory. Here we explain why the currently used performance measures bias the comparison between models and cannot serve as a theoretical benchmark. We introduce fair measures for information-theoretic capacity in associative memory that also provide a theoretical benchmark. In neural networks, two types of synaptic manipulation can be discerned: synaptic plasticity, the change in strength of existing synapses, and structural plasticity, the creation and pruning of synapses. One of the new types of memory capacity we introduce permits quantifying how structural plasticity can increase network efficiency by compressing the network structure, for example, by pruning unused synapses. Specifically, we analyze operating regimes of the Willshaw model in which structural plasticity can compress the network structure and push performance to the theoretical benchmark. The amount C of information stored per synapse can then scale with the logarithm of the network size rather than being constant, as in classical Willshaw and Hopfield nets (C ≤ ln 2 ≈ 0.7). Further, the review contains novel technical material: a capacity analysis of the Willshaw model that rigorously controls for the level of retrieval quality, an analysis for memories with a nonconstant number of active units (where C ≤ 1/(e ln 2) ≈ 0.53), and an analysis of the computational complexity of associative memories with and without network compression.
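The mechanisms the abstract contrasts can be illustrated with a minimal sketch of the Willshaw model, assuming hetero-association of sparse binary patterns (the sizes n, k, M and all names here are illustrative, not from the paper): synaptic plasticity appears as clipped Hebbian learning on binary weights, retrieval thresholds the dendritic sums at the cue activity k, and the low fill ratio of the weight matrix hints at how much structural plasticity could compress the network by pruning silent synapses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, M = 100, 4, 20  # neurons per layer, active units per pattern, stored pairs

def random_pattern(n, k, rng):
    """Binary vector with exactly k active units."""
    v = np.zeros(n, dtype=np.uint8)
    v[rng.choice(n, size=k, replace=False)] = 1
    return v

# Synaptic plasticity: clipped Hebbian learning, W_ij in {0, 1}.
pairs = [(random_pattern(n, k, rng), random_pattern(n, k, rng)) for _ in range(M)]
W = np.zeros((n, n), dtype=np.uint8)
for x, y in pairs:
    W |= np.outer(y, x)  # a synapse is switched on once and never off

def retrieve(W, x, k):
    """Threshold retrieval: fire the units driven by all k active cue units."""
    dendritic_sum = W @ x
    return (dendritic_sum >= k).astype(np.uint8)

# Structural plasticity could prune the zero (silent) entries of W, compressing
# the network; the fill ratio shows how sparse the stored structure remains.
fill = W.mean()
x0, y0 = pairs[0]
y_hat = retrieve(W, x0, k)
```

By construction the Willshaw rule never misses a stored unit (every synapse from a cue unit to a target unit was set during storage), so `y_hat` contains all active bits of `y0`; errors, when the matrix fills up, are purely false positives.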