Rădulescu Anca, Cox Kingsley, Adams Paul
University of Colorado, UCB 526 Boulder, CO 80309-0526, USA.
J Theor Biol. 2009 Jun 21;258(4):489-501. doi: 10.1016/j.jtbi.2009.01.036. Epub 2009 Feb 25.
Recent work on long-term potentiation in brain slices shows that Hebb's rule is not completely synapse-specific, probably due to inter-synapse diffusion of calcium or other factors. We previously suggested that such errors in Hebbian learning might be analogous to mutations in evolution.
We examine this proposal quantitatively, extending the classical Oja model of unsupervised learning by a single linear neuron to include Hebbian inspecificity. We introduce an error matrix E, which expresses possible crosstalk between updating at different connections. When there is no inspecificity, this gives the classical result of convergence to the first principal component of the input distribution (PC1). We show the modified algorithm converges to the leading eigenvector of the matrix EC, where C is the input covariance matrix. In the most biologically plausible case, when there are no intrinsically privileged connections, E has diagonal elements Q and off-diagonal elements (1-Q)/(n-1), where Q, the quality, is expected to decrease with the number of inputs n and with a synaptic parameter b that reflects synapse density, calcium diffusion, etc. We study, analytically and computationally, the dependence of learning accuracy on b, n, and the amount of input activity or correlation. We find that accuracy decreases (learning becomes gradually less useful) with increases in b, particularly for intermediate (i.e., biologically realistic) correlation strength, although some useful learning always occurs up to the trivial limit Q=1/n.
We discuss the relation of our results to Hebbian unsupervised learning in the brain. When the mechanism lacks specificity, the network fails to learn the expected, and typically most useful, result, especially when the input correlation is weak. Hebbian crosstalk would reflect the very high density of synapses along dendrites, and would inevitably degrade learning.
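The crosstalk-modified Oja rule can be checked numerically. The Python sketch below is illustrative only (the abstract does not give the exact update equation): it assumes crosstalk mixes the Hebbian increment across synapses, i.e. Δw = η(E(xy) − y²w) with y = wᵀx, and verifies that the weight vector aligns with the leading eigenvector of EC rather than with PC1 of C. All parameter values (n, Q, η, step count) are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5        # number of inputs (illustrative)
Q = 0.9      # "quality": diagonal element of the error matrix E
eta = 0.005  # learning rate

# Error matrix E: diagonal Q, off-diagonal (1 - Q)/(n - 1),
# as in the case with no intrinsically privileged connections.
E = np.full((n, n), (1 - Q) / (n - 1))
np.fill_diagonal(E, Q)

# A random positive semidefinite input covariance C, and a factor L
# so that x = L @ z (z standard normal) has covariance ~ C.
A = rng.normal(size=(n, n))
C = A @ A.T / n
L = np.linalg.cholesky(C + 1e-9 * np.eye(n))

# Crosstalk-modified Oja rule (assumed form: E applied to the
# Hebbian term x*y; the normalizing term -y^2 * w is unmixed).
w = rng.normal(size=n)
w /= np.linalg.norm(w)
for _ in range(200_000):
    x = L @ rng.normal(size=n)
    y = w @ x
    w += eta * (E @ (y * x) - y**2 * w)

# Predicted attractor: leading eigenvector of E @ C.
# (E @ C is similar to the symmetric matrix E^{1/2} C E^{1/2},
# so its eigenvalues are real and nonnegative.)
vals, vecs = np.linalg.eig(E @ C)
v = np.real(vecs[:, np.argmax(np.real(vals))])
v /= np.linalg.norm(v)

alignment = abs(w @ v) / np.linalg.norm(w)  # should approach 1
print(f"alignment with leading eigenvector of EC: {alignment:.3f}")
```

Setting Q = 1 recovers the classical Oja result (E is the identity and the rule converges to PC1), while lowering Q toward the trivial limit 1/n rotates the attractor away from PC1, which is one way to visualize the degradation of learning discussed above.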