Elliott Terry
Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.
Neural Comput. 2014 Sep;26(9):1924-72. doi: 10.1162/NECO_a_00630. Epub 2014 Jun 12.
A recent model of intrinsic plasticity coupled to Hebbian synaptic plasticity proposes that adaptation of a neuron's threshold and gain in a sigmoidal response function to achieve a sparse, exponential output firing rate distribution facilitates the discovery of heavy-tailed or supergaussian sources in the neuron's inputs. We show that the exponential output distribution is irrelevant to these dynamics and that, furthermore, while sparseness is sufficient, it is not necessary. The intrinsic plasticity mechanism drives the neuron's threshold large and positive, and we prove that in such a regime, the neuron will find supergaussian sources; equally, however, if the threshold is large and negative (an antisparse regime), it will also find supergaussian sources. Away from such extremes, the neuron can also discover subgaussian sources. By examining a neuron with a fixed sigmoidal nonlinearity and considering the synaptic strength fixed-point structure in the two-dimensional parameter space defined by the neuron's threshold and gain, we show that this space is carved up into sub- and supergaussian-input-finding regimes, possibly with regimes of simultaneous stability of sub- and supergaussian sources or regimes of instability of all sources; a single gaussian source may also be stabilized by the presence of a nongaussian source. A neuron's operating point (essentially its threshold and gain coupled with its input statistics) therefore critically determines its computational repertoire. Intrinsic plasticity mechanisms induce trajectories in this parameter space but do not fundamentally modify it. Unless the trajectories cross critical boundaries in this space, intrinsic plasticity is irrelevant and the neuron's nonlinearity may be frozen with identical receptive field refinement dynamics.
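The sub- versus supergaussian distinction central to the abstract can be made concrete via excess kurtosis, which is positive for heavy-tailed (supergaussian) sources and negative for light-tailed (subgaussian) ones. Below is a minimal illustrative sketch, not code from the paper: the sigmoidal response with threshold and gain parameters mirrors the response function described above, while the choice of a Laplacian source as supergaussian and a uniform source as subgaussian is a standard textbook example assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid_response(u, theta, g):
    """Sigmoidal response with threshold theta and gain g.

    These two parameters span the two-dimensional operating-point
    space discussed in the abstract (names are illustrative).
    """
    return 1.0 / (1.0 + np.exp(-g * (u - theta)))

def excess_kurtosis(x):
    """Excess kurtosis: > 0 for supergaussian sources, < 0 for subgaussian,
    and approximately 0 for a gaussian source."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

# A heavy-tailed (supergaussian) source: Laplacian, excess kurtosis ~ +3.
laplace_src = rng.laplace(size=100_000)
# A light-tailed (subgaussian) source: uniform, excess kurtosis ~ -1.2.
uniform_src = rng.uniform(-1.0, 1.0, size=100_000)

print(excess_kurtosis(laplace_src) > 0)   # supergaussian
print(excess_kurtosis(uniform_src) < 0)   # subgaussian
```

At threshold `theta = 0` and gain `g = 1`, the response to a zero input is 0.5; driving the threshold large and positive (the sparse regime of the abstract) pushes typical responses toward zero, which is the regime in which the model is shown to recover supergaussian sources.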