
Can the Brain Do Backpropagation? - Exact Implementation of Backpropagation in Predictive Coding Networks.

Authors

Song Yuhang, Lukasiewicz Thomas, Xu Zhenghua, Bogacz Rafal

Affiliations

Department of Computer Science, University of Oxford, UK.

State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China.

Publication

Adv Neural Inf Process Syst. 2020;33:22566-22579.

Abstract

Backpropagation (BP) has been the most successful algorithm used to train artificial neural networks. However, there are several gaps between BP and learning in biologically plausible neuronal networks of the brain (learning in the brain, or simply BL, for short), in particular, (1) it has been unclear to date, if BP can be implemented exactly via BL, (2) there is a lack of local plasticity in BP, i.e., weight updates require information that is not locally available, while BL utilizes only locally available information, and (3) there is a lack of autonomy in BP, i.e., some external control over the neural network is required (e.g., switching between prediction and learning stages requires changes to dynamics and synaptic plasticity rules), while BL works fully autonomously. Bridging such gaps, i.e., understanding how BP can be approximated by BL, has been of major interest in both neuroscience and machine learning. Despite tremendous efforts, however, no previous model has bridged the gaps at a degree of demonstrating an equivalence to BP, instead, only approximations to BP have been shown. Here, we present for the first time a framework within BL that bridges the above crucial gaps. We propose a BL model that (1) produces updates of the neural weights as BP, while (2) employing local plasticity, i.e., all neurons perform only local computations, done simultaneously. We then modify it to an alternative BL model that (3) also works fully autonomously. Overall, our work provides important evidence for the debate on the long-disputed question whether the brain can perform BP.
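The exact equivalence described in the abstract can be illustrated in the simplest possible setting: a two-layer linear network, where a predictive coding network with local error units and one inference step per layer reproduces BP's weight gradients exactly. This is a minimal sketch of the idea under assumed linear (identity) activations and a single training sample, with variable names of our own choosing; it is not the paper's full algorithm, which handles nonlinear networks and timing of updates in general.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # input -> hidden weights
W2 = rng.normal(size=(1, 3))   # hidden -> output weights
x0 = rng.normal(size=(2, 1))   # input
t = rng.normal(size=(1, 1))    # target

# --- Standard backpropagation for L = 0.5 * ||W2 @ W1 @ x0 - t||^2 ---
x1 = W1 @ x0
y = W2 @ x1
d2 = y - t                     # output delta
gW2 = d2 @ x1.T                # dL/dW2
d1 = W2.T @ d2                 # hidden delta, propagated non-locally
gW1 = d1 @ x0.T                # dL/dW1

# --- Predictive coding with only local computations ---
v1 = W1 @ x0                   # latent initialized at its feedforward value
v2 = t                         # output node clamped to the target
e2 = v2 - W2 @ v1              # local prediction error at the output
dW2 = e2 @ v1.T                # local (Hebbian-like) weight update
v1 = v1 + W2.T @ e2            # one inference step on the hidden latent
e1 = v1 - W1 @ x0              # local prediction error at the hidden layer
dW1 = e1 @ x0.T                # local weight update

# The local updates equal the negative BP gradients, i.e. gradient descent.
assert np.allclose(dW2, -gW2)
assert np.allclose(dW1, -gW1)
```

Each weight update uses only quantities available at the two neurons the synapse connects (the presynaptic activity and the postsynaptic error), yet the resulting updates coincide with BP's gradients; the paper's contribution is showing that this equivalence can be made exact and, in a second variant, fully autonomous.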


Similar Articles

Reverse Differentiation via Predictive Coding.
Proc AAAI Conf Artif Intell. 2022 Jun 28;36(7):8150-8158. doi: 10.1609/aaai.v36i7.20788.
Learning cortical hierarchies with temporal Hebbian updates.
Front Comput Neurosci. 2023 May 24;17:1136010. doi: 10.3389/fncom.2023.1136010. eCollection 2023.

Cited By

Predictive coding with spiking neurons and feedforward gist signaling.
Front Comput Neurosci. 2024 Apr 12;18:1338280. doi: 10.3389/fncom.2024.1338280. eCollection 2024.
Predictive coding networks for temporal prediction.
PLoS Comput Biol. 2024 Apr 1;20(4):e1011183. doi: 10.1371/journal.pcbi.1011183. eCollection 2024 Apr.
A predictive coding model of the N400.
Cognition. 2024 May;246:105755. doi: 10.1016/j.cognition.2024.105755. Epub 2024 Feb 29.
A role for cortical interneurons as adversarial discriminators.
PLoS Comput Biol. 2023 Sep 28;19(9):e1011484. doi: 10.1371/journal.pcbi.1011484. eCollection 2023 Sep.

References

Backpropagation and the brain.
Nat Rev Neurosci. 2020 Jun;21(6):335-346. doi: 10.1038/s41583-020-0277-3. Epub 2020 Apr 17.
A deep learning framework for neuroscience.
Nat Neurosci. 2019 Nov;22(11):1761-1770. doi: 10.1038/s41593-019-0520-2. Epub 2019 Oct 28.
Deep Learning With Asymmetric Connections and Hebbian Updates.
Front Comput Neurosci. 2019 Apr 4;13:18. doi: 10.3389/fncom.2019.00018. eCollection 2019.
Unsupervised learning by competing hidden units.
Proc Natl Acad Sci U S A. 2019 Apr 16;116(16):7723-7731. doi: 10.1073/pnas.1820458116. Epub 2019 Mar 29.
Theories of Error Back-Propagation in the Brain.
Trends Cogn Sci. 2019 Mar;23(3):235-250. doi: 10.1016/j.tics.2018.12.005. Epub 2019 Jan 28.
Deep Supervised Learning Using Local Errors.
Front Neurosci. 2018 Aug 31;12:608. doi: 10.3389/fnins.2018.00608. eCollection 2018.
Dendritic solutions to the credit assignment problem.
Curr Opin Neurobiol. 2019 Feb;54:28-36. doi: 10.1016/j.conb.2018.08.003. Epub 2018 Sep 8.
