National Research Center "Kurchatov Institute", Moscow, Russia.
Neural Netw. 2021 Feb;134:64-75. doi: 10.1016/j.neunet.2020.11.005. Epub 2020 Nov 27.
This work studies experimental and theoretical approaches to finding effective local training rules for unsupervised pattern recognition by high-performance memristor-based Spiking Neural Networks (SNNs). First, the possibility of weight change via Spike-Timing-Dependent Plasticity (STDP) is demonstrated with a pair of hardware analog neurons connected through a (CoFeB)x(LiNbO3)1-x nanocomposite memristor. Next, learning convergence to a solution of a binary clusterization task is analyzed over a wide range of memristive STDP parameters for a single-layer, fully connected feedforward SNN. The memristive STDP behavior that ensures convergence in this simple task is shown to also provide it in the handwritten-digit recognition domain with a more complex SNN architecture featuring Winner-Take-All competition between neurons. To investigate the basic conditions necessary for training convergence, an original probabilistic generative model of a rate-based single-layer network with independent or competing neurons is built and thoroughly analyzed. The main result is the statement of a "correlation growth-anticorrelation decay" principle, which suggests a near-optimal policy for configuring model parameters. This principle is consistent with the requirement of binary clusterization convergence, which can be regarded as a necessary condition for optimal learning and used as a simple benchmark for tuning the parameters of various neural network implementations with population-rate information coding. Finally, a heuristic algorithm is described for experimentally identifying the convergence conditions in a memristive SNN, including robustness to device variability. Owing to the generality of the proposed approach, it can be applied to a wide range of memristors and neurons in software- or hardware-based rate-coding single-layer SNNs when searching for local rules that ensure unsupervised learning convergence in pattern recognition tasks.
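The pairwise STDP weight update underlying the experiments can be sketched as below. This is a minimal illustrative implementation assuming standard exponential plasticity windows; the parameter names and values (a_plus, a_minus, tau) are hypothetical placeholders, not the rule measured for the nanocomposite memristor in the paper.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP weight change for spike-time difference dt = t_post - t_pre (ms).

    Positive dt (pre spike before post spike) potentiates the synapse;
    negative dt depresses it, with exponentially decaying magnitude.
    Parameter values are illustrative only, not taken from the paper.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau)   # long-term potentiation branch
    return -a_minus * np.exp(dt / tau)      # long-term depression branch

def apply_update(w, dt, w_min=0.0, w_max=1.0):
    """Apply the STDP increment and clip to the device conductance range."""
    return float(np.clip(w + stdp_dw(dt), w_min, w_max))
```

In a memristive realization, the clipping bounds stand in for the finite conductance range of the device; in the hardware experiments the analogous bound comes from the memristor's resistive-state limits.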