Golosio Bruno, Tiddia Gianmarco, De Luca Chiara, Pastorelli Elena, Simula Francesco, Paolucci Pier Stanislao
Department of Physics, University of Cagliari, Cagliari, Italy.
Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy.
Front Comput Neurosci. 2021 Feb 17;15:627620. doi: 10.3389/fncom.2021.627620. eCollection 2021.
Over the past decade there has been growing interest in the development of parallel hardware systems for simulating large-scale networks of spiking neurons. Compared to other highly parallel systems, GPU-accelerated solutions have the advantage of relatively low cost and great versatility, thanks also to the possibility of using the CUDA-C/C++ programming languages. NeuronGPU is a GPU library for large-scale simulations of spiking neural network models, written in C++ and CUDA-C++ and based on a novel spike-delivery algorithm. The library includes simple LIF (leaky-integrate-and-fire) neuron models as well as several multisynapse AdEx (adaptive-exponential-integrate-and-fire) neuron models with current-based or conductance-based synapses and different types of spike generators; it provides tools for recording spikes, state variables, and parameters, and it supports user-definable models. The numerical solution of the differential equations of the AdEx dynamics is performed through a parallel implementation, written in CUDA-C++, of the fifth-order Runge-Kutta method with adaptive step-size control. In this work we evaluate the performance of the library on the simulation of a cortical microcircuit model, based on LIF neurons and current-based synapses, and of balanced networks of excitatory and inhibitory neurons, using AdEx or Izhikevich neuron models and conductance-based or current-based synapses. On these models, we show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity. In particular, using a single NVIDIA GeForce RTX 2080 Ti GPU board, the full-scale cortical-microcircuit model, which includes about 77,000 neurons and 3 · 10^8 connections, can be simulated at a speed very close to real time, while the simulation time of a balanced network of 1,000,000 AdEx neurons with 1,000 connections per neuron was about 70 s per second of biological activity.