Equipe de Vision et Calcul Naturel, Vision Institute, Université Pierre et Marie Curie, Unité Mixte de Recherche S968 Inserm, Centre National de la Recherche Scientifique Unité Mixte de Recherche 7210, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, Paris, France.
Advanced Processors Technology Group, School of Computer Science, University of Manchester, Manchester, UK.
Front Neurosci. 2015 Jan 20;8:429. doi: 10.3389/fnins.2014.00429. eCollection 2014.
Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and from local Hebbian or spike-timing-dependent plasticity rules. Simulating large portions of neural tissue has created an increasingly strong need for large-scale simulations of plastic neural networks on special-purpose hardware platforms, because synaptic transmissions and updates are poorly matched to the computing style supported by current architectures. Given the great diversity of biological plasticity phenomena and of the corresponding models, there is a strong need to test various hypotheses about plasticity before committing to a single hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to processing synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by implementing a variety of spike- and rate-based learning rules, including standard spike-timing-dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible, and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
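For readers unfamiliar with the learning rules named above, the standard pair-based STDP rule can be sketched as follows. This is a minimal illustration of the exponential STDP window only; the amplitude and time-constant values are placeholders, not the parameters used in the SpiNNaker implementation described here.

```python
import math

# Placeholder parameters (illustrative only, not from the paper).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

def stdp_weight_change(dt):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates the synapse;
    post-before-pre (dt < 0) depresses it.
    """
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0
```

In the architecture described in the abstract, updates of this kind would be computed on the dedicated plasticity cores, decoupled from the cores running the neural and synaptic simulation.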