Department of Precision Instrument, Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Optical Memory National Engineering Research Center, Tsinghua University, Beijing, China.
Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China.
Nat Commun. 2022 Jan 10;13(1):65. doi: 10.1038/s41467-021-27653-2.
There are two principal approaches to learning in artificial intelligence: error-driven global learning and neuroscience-oriented local learning. Integrating them into one network may provide complementary learning capabilities for versatile learning scenarios. At the same time, neuromorphic computing holds great promise, but it still needs effective algorithms and algorithm-hardware co-designs to fully exploit its advantages. Here, we present a neuromorphic global-local synergic learning model by introducing a brain-inspired meta-learning paradigm and a differentiable spiking model that incorporates neuronal dynamics and synaptic plasticity. The model can meta-learn local plasticity and receive top-down supervision information for multiscale learning. We demonstrate its advantages on multiple tasks, including few-shot learning, continual learning, and fault-tolerance learning with neuromorphic vision sensors, where it achieves significantly higher performance than single-learning methods. We further implement the model on the Tianjic neuromorphic platform by exploiting algorithm-hardware co-designs and show that it can fully utilize the neuromorphic many-core architecture to develop a hybrid computation paradigm.
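To make the abstract's core idea concrete, the sketch below combines the two ingredients it names: spiking neuronal dynamics (a leaky integrate-and-fire update) and a local, Hebbian-style plasticity rule whose rate parameter is the kind of quantity a global meta-learner would tune. This is a minimal illustration, not the paper's model; all function names, time constants, and the specific trace-based rule are assumptions for exposition.

```python
import numpy as np

def lif_step(v, i_in, tau=2.0, v_th=1.0):
    # Leaky integrate-and-fire: decay the membrane potential, integrate
    # input current, emit a spike at threshold, then hard-reset.
    v = v * (1.0 - 1.0 / tau) + i_in
    spike = (v >= v_th).astype(float)
    v = v * (1.0 - spike)
    return v, spike

def local_update(w, pre, post, trace, eta=0.01, tau_trace=5.0):
    # Hebbian-style local plasticity: an eligibility trace of presynaptic
    # activity, with weight changes gated by postsynaptic spikes. In the
    # paper's setting, a parameter like `eta` is what top-down global
    # (meta-)learning would adjust; here it is just a fixed constant.
    trace = trace * (1.0 - 1.0 / tau_trace) + pre
    w = w + eta * np.outer(post, trace)
    return w, trace

# Tiny demo: 3 input channels driving 2 LIF neurons for 20 time steps.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, size=(2, 3))
v = np.zeros(2)
trace = np.zeros(3)
for _ in range(20):
    pre = rng.random(3)            # stand-in for input spike rates
    v, post = lif_step(v, w @ pre)
    w, trace = local_update(w, pre, post, trace)
```

In the full model, the forward pass would be made differentiable (e.g. via a surrogate gradient through the threshold) so that global error signals can meta-learn the local rule's parameters.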