Del Giudice P, Fusi S, Badoni D, Dante V, Amit D J
Istituto Superiore di Sanità, Physics Laboratory, Rome, Italy.
Network. 1998 May;9(2):183-205. doi: 10.1088/0954-898x/9/2/003.
LANN27 is an electronic device implementing in discrete electronics a fully connected (full feedback) network of 27 neurons and 351 plastic synapses with stochastic Hebbian learning. Both neurons and synapses are dynamic elements, with two time constants--fast for neurons and slow for synapses. Learning, i.e. the synaptic dynamics, is analogue and is driven in a Hebbian way by the neural activities. Long-term memorization takes place on a discrete set of synaptic efficacies and is effected in a stochastic manner. The intense feedback between the nonlinear neural elements, via the learned synaptic structure, creates in an organic way a set of attractors for the collective retrieval dynamics of the neural system, akin to Hebbian learned reverberations. The resulting structure of the attractors is a record of the large-scale statistics of the uncontrolled, incoming flow of stimuli. As the statistics of the stimulus flow change significantly, the attractors slowly follow and the network behaves as a palimpsest: the old is gradually replaced by the new. Moreover, the slow learning creates attractors which render the network a prototype extractor: entire clouds of stimuli used in training, all noisy versions of a prototype, converge upon retrieval to the attractor corresponding to that prototype. Here we describe the process of studying the collective dynamics of the network before, during and following learning, a process rendered complex by the richness of the possible stimulus streams and the large dimensionality of the space of states of the network. We propose sampling techniques and modes of representation for the outcome.
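Below is a minimal sketch, in Python, of the kind of stochastic Hebbian learning on two-state synapses that the abstract describes: coincident activity potentiates a synapse only with a small probability, anti-coincident activity depresses it with a small probability, and retrieval runs a simple asynchronous recurrent dynamics toward an attractor. The network size (27 neurons, 351 symmetric synapses) is taken from the abstract; the transition probabilities, efficacy values, threshold rule and input coding are illustrative assumptions, not the actual parameters of the LANN27 circuit.

```python
# A minimal sketch (not the authors' LANN27 circuitry) of stochastic Hebbian
# learning on two-state synapses in a small fully connected attractor network.
# Transition probabilities, efficacy values and the retrieval threshold are
# illustrative assumptions, not parameters taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

N = 27                      # neurons, as in LANN27
J_LOW, J_HIGH = 0.0, 1.0    # two discrete synaptic efficacies (assumed values)
q_pot, q_dep = 0.1, 0.1     # small transition probabilities -> slow, stochastic learning (assumed)

# symmetric synaptic matrix, no self-connections (351 distinct pairs)
J = np.full((N, N), J_LOW)
np.fill_diagonal(J, 0.0)

def learn(J, xi):
    """One stochastic Hebbian presentation of a binary (+/-1) stimulus xi."""
    for i in range(N):
        for j in range(i + 1, N):
            coincident = xi[i] * xi[j] > 0               # both active or both inactive
            if coincident and rng.random() < q_pot:      # potentiate with probability q_pot
                J[i, j] = J[j, i] = J_HIGH
            elif not coincident and rng.random() < q_dep:  # depress with probability q_dep
                J[i, j] = J[j, i] = J_LOW
    return J

def retrieve(J, s, steps=20):
    """Asynchronous retrieval dynamics; returns the state the network settles into."""
    theta = J.sum(axis=1) / 2        # simple assumed per-neuron threshold
    s = s.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ ((s + 1) / 2) > theta[i] else -1
    return s

# train on noisy versions of one prototype; retrieval should recover the prototype
prototype = rng.choice([-1, 1], size=N)
for _ in range(200):
    noisy = prototype * np.where(rng.random(N) < 0.1, -1, 1)   # ~10% of bits flipped
    learn(J, noisy)

probe = prototype * np.where(rng.random(N) < 0.2, -1, 1)       # degraded stimulus
print("overlap with prototype:", retrieve(J, probe) @ prototype / N)
```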