College of Computer Science, Sichuan University, Chengdu 610065, China
College of Computer Science, Zhejiang University of Technology, Hangzhou 310014, China
Neural Comput. 2021 Aug 19;33(9):2439-2472. doi: 10.1162/neco_a_01423.
Learning new concepts rapidly from a few examples is an open issue in spike-based machine learning. Such few-shot learning poses substantial challenges to current learning methods for spiking neural networks (SNNs) because of the lack of task-related prior knowledge. The recent learning-to-learn (L2L) approach allows SNNs to acquire prior knowledge through example-level learning and task-level optimization. However, existing L2L-based frameworks do not target neural dynamics (i.e., neuronal and synaptic parameter changes) on different timescales. This diversity of temporal dynamics is an important attribute of spike-based learning: it enables networks to rapidly acquire knowledge from very few examples and to gradually integrate that knowledge. In this work, we consider neural dynamics on various timescales and propose a multi-timescale optimization (MTSO) framework for SNNs. The framework introduces an adaptive-gated LSTM to accommodate two timescales of neural dynamics: short-term learning and long-term evolution. Short-term learning is a fast knowledge-acquisition process realized by a novel surrogate gradient online learning (SGOL) algorithm, in which the LSTM guides the gradient updates of the SNN on a short timescale through an adaptive learning rate and weight-decay gating. Long-term evolution aims to slowly integrate the acquired knowledge and form a prior, which is achieved by optimizing the LSTM's guidance process so as to tune the SNN parameters on a long timescale. Experimental results demonstrate that the collaborative optimization of multi-timescale neural dynamics enables SNNs to achieve promising performance on few-shot learning tasks.
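The short-term update described above can be sketched in a few lines: the controller emits, per parameter, a learning-rate gate and a weight-decay gate that modulate the surrogate-gradient step. This is a minimal NumPy sketch of that gated update, not the paper's implementation; the LSTM is replaced by a random stand-in output, and all names and gate scalings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: one SNN weight vector and its surrogate gradient
# (a random stand-in for the gradient produced by SGOL).
n = 8
w = rng.normal(size=n)       # SNN synaptic weights
grad = rng.normal(size=n)    # surrogate gradient (stand-in)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical controller output: in the paper this comes from the
# adaptive-gated LSTM; here it is random. Two values per parameter
# are squashed into (0, 1) to form the gates.
ctrl_out = rng.normal(size=(n, 2))
alpha = 0.1 * sigmoid(ctrl_out[:, 0])    # adaptive learning-rate gate
beta = 0.01 * sigmoid(ctrl_out[:, 1])    # adaptive weight-decay gate

# Short-timescale (inner-loop) update: gated gradient step plus gated decay.
w_new = w - alpha * grad - beta * w
```

On the long timescale, the controller's own parameters would be optimized across tasks (here that outer loop is omitted), so that the gating it produces encodes the accumulated prior.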