Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, 46556, USA.
Nat Commun. 2023 Mar 31;14(1):1805. doi: 10.1038/s41467-023-37562-1.
Backpropagation is widely used to train artificial neural networks, but its relationship to synaptic plasticity in the brain is unknown. Some biological models of backpropagation rely on feedback projections that are symmetric with feedforward connections, but experiments do not corroborate the existence of such symmetric backward connectivity. Random feedback alignment offers an alternative model in which errors are propagated backward through fixed, random backward connections. This approach successfully trains shallow models, but learns slowly and does not perform well with deeper models or online learning. In this study, we develop a meta-learning approach to discover interpretable, biologically plausible plasticity rules that improve online learning performance with fixed random feedback connections. The resulting plasticity rules show improved online training of deep models in the low data regime. Our results highlight the potential of meta-learning to discover effective, interpretable learning rules satisfying biological constraints.
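The random feedback alignment mechanism the abstract refers to can be sketched as follows: the hidden-layer error signal is computed through a fixed random matrix `B` rather than the transpose of the feedforward output weights. This is a minimal NumPy toy, not the paper's method; the two-layer ReLU network, the online regression task, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network trained online with random feedback alignment (FA):
# the error reaching the hidden layer travels through a fixed random
# matrix B instead of W2.T, as backpropagation would require.
n_in, n_hid, n_out = 10, 32, 1
W1 = rng.normal(0, 0.1, (n_hid, n_in))   # feedforward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # feedforward weights, layer 2
B = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback weights

def relu(x):
    return np.maximum(x, 0.0)

# Illustrative target: a random linear map to regress against, online.
W_true = rng.normal(0, 1.0, (n_out, n_in))

lr = 0.01
losses = []
for step in range(2000):
    x = rng.normal(0, 1.0, (n_in,))      # one sample at a time (online)
    y = W_true @ x
    h = relu(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y                        # output error
    losses.append(float(e @ e))
    # FA step: hidden error uses the fixed random B, not W2.T.
    delta_h = (B @ e) * (h > 0)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)

print(np.mean(losses[:100]), np.mean(losses[-100:]))
```

Despite `B` never being trained, the loss falls: the feedforward weights tend to align with the random feedback pathway during training, which is the "alignment" the abstract's approach builds on.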