Senn Walter, Fusi Stefano
Department of Physiology, University of Bern, CH-30 Bern, Switzerland.
Neural Comput. 2005 Oct;17(10):2106-38. doi: 10.1162/0899766054615644.
Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be avoided by introducing additional constraints on the synaptic and neural dynamics. We consider Hebbian plasticity of excitatory synapses. A synapse is modified only if the postsynaptic response does not match the desired output. With this learning rule, the memory performance originally achieved with unbounded weights is regained, provided that (1) there is some global inhibition, (2) the learning rate is small, and (3) the neurons can discriminate small differences in the total synaptic input (e.g., by making the neuronal threshold small compared to the total postsynaptic input). We prove, in the form of a generalized perceptron convergence theorem, that under these constraints a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns. During the learning process, excitation becomes roughly balanced by inhibition, and the neuron classifies the patterns on the basis of small differences around this balance. The fact that synapses saturate has the additional benefit that nonlinearly separable patterns, such as similar patterns with contradicting outputs, eventually generate a subthreshold response and therefore silence neurons that cannot provide any information.
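The learning rule described above can be illustrated with a minimal sketch. This is not the authors' implementation; it is a toy NumPy version under stated assumptions: excitatory weights are clipped to a bounded range `[0, w_max]`, global inhibition is a fixed subtractive term, and a synapse is updated only when the response does not match the desired output (the stop-learning condition). All function and parameter names are hypothetical.

```python
import numpy as np

def train_bounded_perceptron(patterns, labels, n_epochs=500,
                             lr=0.05, w_max=1.0, inhibition=0.5,
                             theta=0.0):
    """Hebbian learning with bounded excitatory synapses.

    Sketch of the scheme in the abstract (illustrative, not the paper's
    exact dynamics): weights stay in [0, w_max], a fixed global
    `inhibition` is subtracted from the excitatory drive, and weights
    change only on a response/target mismatch. With a small learning
    rate and a small threshold `theta`, excitation settles into rough
    balance with inhibition, and patterns are classified by small
    differences around that balance.
    """
    n, d = patterns.shape
    rng = np.random.default_rng(0)
    w = rng.uniform(0.0, w_max, size=d)  # bounded excitatory weights
    for _ in range(n_epochs):
        errors = 0
        for x, y in zip(patterns, labels):
            # response: total excitation minus global inhibition vs. threshold
            response = 1 if w @ x - inhibition > theta else 0
            if response != y:                   # stop-learning: update only on error
                w += lr * (y - response) * x    # Hebbian potentiation/depression
                np.clip(w, 0.0, w_max, out=w)   # synapses saturate at their bounds
                errors += 1
        if errors == 0:                         # all patterns classified
            break
    return w

# Usage: a linearly separable toy task where the label is carried by
# one input feature; the rule must suppress the uninformative synapses.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(30, 20)).astype(float)
y = X[:, 0].astype(int)
w = train_bounded_perceptron(X, y)
preds = (X @ w - 0.5 > 0.0).astype(int)
```

Note the asymmetry with an ordinary perceptron: because the weights are purely excitatory and bounded, discrimination is only possible relative to the inhibitory term, which is why some global inhibition is listed as a necessary condition.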