Hebbian learning from higher-order correlations requires crosstalk minimization.

Authors

Cox K J A, Adams P R

Affiliation

Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY 11794-5230, USA

Publication Information

Biol Cybern. 2014 Aug;108(4):405-22. doi: 10.1007/s00422-014-0608-4. Epub 2014 May 27.

Abstract

Activity-dependent synaptic plasticity should be extremely connection-specific, though experiments have shown it is not, and biophysics suggests it cannot be. Extreme specificity (near-zero "crosstalk") might be essential for unsupervised learning from higher-order correlations, especially when a neuron has many inputs. It is well known that a normalized nonlinear Hebbian rule can learn "unmixing" weights from inputs generated by linearly combining independently fluctuating non-Gaussian sources using an orthogonal mixing matrix. We previously reported that even if the matrix is only approximately orthogonal, a nonlinear-specific Hebbian rule can usually learn almost correct unmixing weights (Cox and Adams, Front Comput Neurosci 3, 2009, doi: 10.3389/neuro.10.011.2009). We also reported simulations showing that as crosstalk increases from zero, the learned weight vector first moves slightly away from the crosstalk-free direction and then, at a sharp threshold level of inspecificity, jumps to a completely incorrect direction. Here, we report further numerical experiments showing that above this threshold, residual learning is driven instead almost entirely by second-order input correlations, as occurs with purely Gaussian sources or with a linear rule, at any level of crosstalk. Thus, in this "ICA" model, learning from higher-order correlations, which is required for unmixing, demands high specificity. We compare our results with a recent mathematical analysis of the effect of crosstalk for exactly orthogonal mixing, which revealed that a second, even lower, threshold exists below which successful learning is impossible unless the weights happen to start close to the correct direction. Our simulations show that this also holds when the mixing is not exactly orthogonal. These results suggest that if the brain uses simple Hebbian learning, it must operate with extraordinarily accurate synaptic plasticity to ensure powerful high-dimensional learning. Synaptic crowding would preclude this when inputs are numerous, and we propose that the neocortex might be distinguished by special circuitry that promotes the extreme specificity needed for high-dimensional nonlinear learning.
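To make the setup concrete, here is a minimal Python sketch of this kind of simulation. It is not the authors' code: the Laplacian sources, the cubic Hebbian nonlinearity, the uniform-spillover crosstalk matrix E, and all parameter values (n, eps, eta, steps) are illustrative assumptions based on the setup the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10          # number of inputs/sources (illustrative)
eps = 0.05      # crosstalk level (illustrative; the paper sweeps this)
eta = 1e-3      # learning rate
steps = 200_000

# Approximately orthogonal mixing matrix: random orthogonal + small perturbation.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = Q + 0.05 * rng.standard_normal((n, n))

# Simple crosstalk ("error") matrix: each synapse keeps a fraction (1 - eps)
# of its own update; the rest spills uniformly onto the other n - 1 synapses.
E = (1 - eps) * np.eye(n) + (eps / (n - 1)) * (np.ones((n, n)) - np.eye(n))

w = rng.standard_normal(n)
w /= np.linalg.norm(w)

for _ in range(steps):
    s = rng.laplace(size=n)     # independent non-Gaussian (Laplacian) sources
    x = M @ s                   # linearly mixed inputs
    y = w @ x                   # postsynaptic output
    dw = eta * x * y**3         # nonlinear Hebbian update (cubic nonlinearity)
    w += E @ dw                 # crosstalk redistributes part of each update
    w /= np.linalg.norm(w)      # explicit weight normalization

# Ideal unmixing directions are the (normalized) rows of M^-1. Purely
# second-order (PCA-like) learning would instead find the top eigenvector
# of the input covariance, which is proportional to M @ M.T here.
U = np.linalg.inv(M)
U /= np.linalg.norm(U, axis=1, keepdims=True)
pc1 = np.linalg.eigh(M @ M.T)[1][:, -1]
print("best |cos| with an unmixing direction:", np.abs(U @ w).max())
print("|cos| with top covariance eigenvector:", abs(pc1 @ w))
```

Sweeping eps upward from zero in a sketch like this is how one would look for the threshold behavior the abstract describes: below threshold the first printed cosine should stay near 1, while above it the learned vector should drift toward the covariance eigenvector, i.e., learning becomes second-order-driven.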

