Vogginger Bernhard, Schüffny René, Lansner Anders, Cederström Love, Partzsch Johannes, Höppner Sebastian
Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany.
Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology (KTH), Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden.
Front Neurosci. 2015 Jan 22;9:2. doi: 10.3389/fnins.2015.00002. eCollection 2015.
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule, the pre-synaptic, post-synaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved, first, by rewriting the model so that the number of basic arithmetic operations per update is halved, and, second, by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high-performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
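The core idea behind the event-driven implementation can be illustrated with a minimal sketch: a low-pass-filtered spike trace obeying dz/dt = -z/tau has the closed-form solution z(t) = z(t0)·exp(-(t-t0)/tau) between events, so the state only needs to be advanced when a spike arrives or the trace is read, rather than at every Euler time step. The sketch below is illustrative only; the class and function names (EventDrivenTrace, euler_trace, make_decay_lut) and all parameter values are assumptions, not taken from the paper, and it models a single trace rather than the full eight-variable BCPNN synapse.

```python
import math

class EventDrivenTrace:
    """Single low-pass-filtered spike trace, dz/dt = -z/tau plus an
    impulse on each spike, advanced analytically (exact for any
    elapsed interval). Names and defaults are illustrative."""

    def __init__(self, tau, z0=0.0):
        self.tau = tau      # decay time constant
        self.z = z0         # trace value at time self.last_t
        self.last_t = 0.0

    def advance(self, t):
        # Closed-form exponential decay over the elapsed interval.
        self.z *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        return self.z

    def on_spike(self, t, weight=1.0):
        # Decay up to the spike time, then apply the impulse.
        self.advance(t)
        self.z += weight


def euler_trace(tau, spike_times, t_end, dt=1.0):
    """Reference fixed-step explicit Euler integration of the same
    trace, for comparison with the analytic update."""
    z = 0.0
    spikes = set(spike_times)
    for i in range(int(round(t_end / dt))):
        t = i * dt
        if t in spikes:
            z += 1.0
        z -= dt * z / tau   # explicit Euler step
    return z


def make_decay_lut(tau, dt, n):
    """Illustrative look-up table of decay factors exp(-k*dt/tau) for
    integer multiples k of a minimal time step, avoiding repeated
    exp() calls; table size and resolution are assumptions."""
    return [math.exp(-k * dt / tau) for k in range(n)]
```

For a spike at t = 0 and a read at t = tau, the event-driven trace returns exactly exp(-1) with a single exponential evaluation, whereas the Euler variant approaches that value only as dt shrinks, doing one update per time step along the way.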