Senn Walter, Fusi Stefano
Department of Physiology, University of Bern, CH-3012 Bern, Switzerland.
Phys Rev E Stat Nonlin Soft Matter Phys. 2005 Jun;71(6 Pt 1):061907. doi: 10.1103/PhysRevE.71.061907. Epub 2005 Jun 16.
The efficacy of a biological synapse is naturally bounded and, at some resolution, discrete, at the latest at the level of single vesicles. The finite number of synaptic states dramatically reduces the storage capacity of a network when online learning is considered (i.e., when the synapses are immediately modified by each pattern): the trace of old memories decays exponentially with the number of new memories (palimpsest property). Moreover, finding the discrete synaptic strengths which enable the classification of linearly separable patterns is a combinatorially hard problem known to be NP-complete. In this paper we show that learning with discrete (binary) synapses is nevertheless possible with high probability if a randomly selected fraction of synapses is modified following each stimulus presentation (slow stochastic learning). As an additional constraint, the synapses are only changed if the output neuron does not give the desired response, as in classical perceptron learning. We prove that for linearly separable classes of patterns the stochastic learning algorithm converges with arbitrarily high probability in a finite number of presentations, provided that the number of neurons encoding the patterns is large enough. The stochastic learning algorithm is successfully applied to a standard classification problem of nonlinearly separable patterns by using multiple, stochastically independent output units, achieving a performance comparable to the best reported for the task.
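The learning rule described in the abstract can be illustrated with a minimal sketch: binary synapses, a thresholded output unit, updates applied only when the output is wrong, and each eligible synapse modified independently with a small probability. The threshold value, the change probability `p_change`, and the function name are illustrative assumptions, not details taken from the paper.

```python
import random

def stochastic_perceptron(patterns, labels, n_epochs=500, p_change=0.05, seed=0):
    """Perceptron-style learning with binary (0/1) synapses.

    Slow stochastic learning: on each presentation, synapses are touched
    only when the output neuron gives the wrong response, and each active
    synapse is then flipped independently with probability p_change.
    Threshold and p_change are illustrative choices, not from the paper.
    """
    rng = random.Random(seed)
    n = len(patterns[0])
    w = [rng.randint(0, 1) for _ in range(n)]   # binary synaptic states
    theta = n / 4                               # firing threshold (assumption)
    for _ in range(n_epochs):
        errors = 0
        for x, y in zip(patterns, labels):
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0
            if out == y:
                continue                        # no change on correct output
            errors += 1
            for i in range(n):
                if x[i] == 1 and rng.random() < p_change:
                    # potentiate if the neuron should have fired, depress otherwise
                    w[i] = 1 if y == 1 else 0
        if errors == 0:                         # all patterns classified; stop
            break
    return w
```

Because only a small random fraction of synapses changes per error, old synaptic configurations are overwritten slowly, which is the mechanism the paper exploits to make convergence with discrete synapses probable rather than combinatorially hard.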