Zhang Tielin, Cheng Xiang, Jia Shuncheng, Poo Mu-Ming, Zeng Yi, Xu Bo
Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
Sci Adv. 2021 Oct 22;7(43):eabh0146. doi: 10.1126/sciadv.abh0146. Epub 2021 Oct 20.
Many synaptic plasticity rules found in natural circuits have not been incorporated into artificial neural networks (ANNs). We showed that incorporating a nonlocal feature of synaptic plasticity found in natural neural networks, whereby synaptic modification at output synapses of a neuron backpropagates to its input synapses made by upstream neurons, markedly reduced the computational cost without affecting the accuracy of spiking neural networks (SNNs) and ANNs in supervised learning for three benchmark tasks. For SNNs, synaptic modification at output neurons generated by spike timing–dependent plasticity was allowed to self-propagate to a limited set of upstream synapses. For ANNs, synaptic weights modified via the conventional backpropagation algorithm at output neurons self-backpropagated to a limited set of upstream synapses. Such self-propagating plasticity may produce coordinated synaptic modifications across neuronal layers that reduce computational cost.
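The mechanism the abstract describes for ANNs can be illustrated with a toy example. Below is a minimal NumPy sketch of the self-backpropagation idea: the output-layer weights receive an exact gradient (delta-rule) update, and a scalar summary of that modification is then propagated to a limited random subset of upstream synapses instead of computing the full chain-rule gradient for the hidden layer. All names (sbp_step, eta_sbp, frac_upstream) and the specific upstream propagation rule are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer network: input -> hidden (W1) -> output (W2).
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))
W2 = rng.normal(0.0, 0.1, (n_hid, n_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sbp_step(x, target, eta=0.1, eta_sbp=0.05, frac_upstream=0.2):
    """One training step in the style of self-backpropagation (SBP).

    The output synapses W2 get an exact gradient update. Rather than
    backpropagating the error through W2 to train W1, a scalar summary
    of the W2 modification is "self-propagated" to a limited random
    subset of upstream synapses in W1 -- a sketch of the nonlocal
    plasticity described in the abstract, not the paper's exact rule.
    """
    global W1, W2
    h = sigmoid(x @ W1)          # hidden-layer activity
    y = sigmoid(h @ W2)          # output-layer activity

    # Exact gradient (delta rule) at the output synapses.
    delta_out = (y - target) * y * (1.0 - y)
    dW2 = -eta * np.outer(h, delta_out)
    W2 += dW2

    # SBP-style step: reuse a scalar summary of the downstream change,
    # gated by the input, on a sparse subset of upstream synapses.
    mask = rng.random(W1.shape) < frac_upstream   # limited upstream synapses
    W1 += eta_sbp * dW2.mean() * np.outer(x, np.ones(n_hid)) * mask

    return 0.5 * np.sum((y - target) ** 2)

# Toy usage: drive one input pattern toward a fixed target.
x = rng.random(n_in)
t = np.array([1.0, 0.0, 0.0, 1.0])
for step in range(200):
    loss = sbp_step(x, t)
print(f"final loss: {loss:.4f}")
```

Because the upstream update reuses the already-computed downstream modification and touches only a sparse subset of synapses, it replaces a full matrix backpropagation through the hidden layer with a cheap local write, which sketches where the computational saving reported in the abstract could come from.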