Backpropagation through time and the brain.

Affiliations

DeepMind, London, UK; UCL, UK.

DeepMind, London, UK.

Publication Information

Curr Opin Neurobiol. 2019 Apr;55:82-89. doi: 10.1016/j.conb.2019.01.011. Epub 2019 Mar 7.

DOI: 10.1016/j.conb.2019.01.011
PMID: 30851654
Abstract

It has long been speculated that the backpropagation-of-error algorithm (backprop) may be a model of how the brain learns. Backpropagation-through-time (BPTT) is the canonical temporal-analogue to backprop used to assign credit in recurrent neural networks in machine learning, but there's even less conviction about whether BPTT has anything to do with the brain. Even in machine learning the use of BPTT in classic neural network architectures has proven insufficient for some challenging temporal credit assignment (TCA) problems that we know the brain is capable of solving. Nonetheless, recent work in machine learning has made progress in solving difficult TCA problems by employing novel memory-based and attention-based architectures and algorithms, some of which are brain inspired. Importantly, these recent machine learning methods have been developed in the context of, and with reference to BPTT, and thus serve to strengthen BPTT's position as a useful normative guide for thinking about temporal credit assignment in artificial and biological systems alike.
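To make the algorithm under discussion concrete, here is a minimal sketch of BPTT for a toy vanilla RNN. The setup (network sizes, the tanh nonlinearity, and the task of recalling the first input at the last step, a simple temporal credit assignment problem) is illustrative and not taken from the paper:

```python
import numpy as np

# Toy vanilla RNN trained with backpropagation-through-time (BPTT).
rng = np.random.default_rng(0)
T, n_in, n_h = 5, 3, 4

Wx = rng.normal(0, 0.5, (n_h, n_in))   # input-to-hidden weights
Wh = rng.normal(0, 0.5, (n_h, n_h))    # hidden-to-hidden (recurrent) weights
Wy = rng.normal(0, 0.5, (n_in, n_h))   # hidden-to-output weights

def forward(xs):
    """Run the RNN over the sequence, caching hidden states for BPTT."""
    hs = [np.zeros(n_h)]
    for x in xs:
        hs.append(np.tanh(Wx @ x + Wh @ hs[-1]))
    y = Wy @ hs[-1]                     # read out only at the final step
    return hs, y

def bptt(xs, target):
    """Backpropagate the final-step error through all T time steps."""
    hs, y = forward(xs)
    dy = y - target                     # d(0.5*||y - target||^2)/dy
    dWy = np.outer(dy, hs[-1])
    dWx, dWh = np.zeros_like(Wx), np.zeros_like(Wh)
    dh = Wy.T @ dy                      # gradient flowing into h_T
    for t in range(T - 1, -1, -1):      # unroll backwards through time
        dz = dh * (1 - hs[t + 1] ** 2)  # backprop through tanh
        dWx += np.outer(dz, xs[t])
        dWh += np.outer(dz, hs[t])
        dh = Wh.T @ dz                  # pass credit to the previous step
    return dWx, dWh, dWy

xs = [rng.normal(size=n_in) for _ in range(T)]
target = xs[0]                          # task: remember the first input
dWx, dWh, dWy = bptt(xs, target)

# Sanity check: compare one BPTT gradient entry against a central
# finite-difference estimate of the loss gradient.
eps = 1e-6
Wh[0, 0] += eps
_, y_plus = forward(xs)
Wh[0, 0] -= 2 * eps
_, y_minus = forward(xs)
Wh[0, 0] += eps
num = (0.5 * np.sum((y_plus - target) ** 2)
       - 0.5 * np.sum((y_minus - target) ** 2)) / (2 * eps)
assert abs(num - dWh[0, 0]) < 1e-4
```

The backward loop is the point: credit for an error observed at step T is propagated through the recurrent weights to every earlier step, which is exactly the biologically questionable requirement (storing and revisiting past states) that the abstract alludes to.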

Similar Articles

1. Backpropagation through time and the brain.
Curr Opin Neurobiol. 2019 Apr;55:82-89. doi: 10.1016/j.conb.2019.01.011. Epub 2019 Mar 7.
2. Backpropagation algorithms and Reservoir Computing in Recurrent Neural Networks for the forecasting of complex spatiotemporal dynamics.
Neural Netw. 2020 Jun;126:191-217. doi: 10.1016/j.neunet.2020.02.016. Epub 2020 Mar 21.
3. Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs.
Neural Comput. 2022 May 19;34(6):1329-1368. doi: 10.1162/neco_a_01497.
4. Comparing SNNs and RNNs on neuromorphic vision datasets: Similarities and differences.
Neural Netw. 2020 Dec;132:108-120. doi: 10.1016/j.neunet.2020.08.001. Epub 2020 Aug 17.
5. Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations.
IEEE Trans Neural Netw Learn Syst. 2020 Oct;31(10):4267-4278. doi: 10.1109/TNNLS.2019.2953622. Epub 2020 Jan 20.
6. Backpropagation and the brain.
Nat Rev Neurosci. 2020 Jun;21(6):335-346. doi: 10.1038/s41583-020-0277-3. Epub 2020 Apr 17.
7. Feedforward Chemical Neural Network: An In Silico Chemical System That Learns xor.
Artif Life. 2017 Summer;23(3):295-317. doi: 10.1162/ARTL_a_00233.
8. Online Spatio-Temporal Learning in Deep Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):8894-8908. doi: 10.1109/TNNLS.2022.3153985. Epub 2023 Oct 27.
9. Backpropagation algorithms for a broad class of dynamic networks.
IEEE Trans Neural Netw. 2007 Jan;18(1):14-27. doi: 10.1109/TNN.2006.882371.
10. Inferring neural activity before plasticity as a foundation for learning beyond backpropagation.
Nat Neurosci. 2024 Feb;27(2):348-358. doi: 10.1038/s41593-023-01514-1. Epub 2024 Jan 3.

Cited By

1. Brain-like variational inference.
ArXiv. 2025 May 16:arXiv:2410.19315v2.
2. Biologically plausible gated recurrent neural networks for working memory and learning-to-learn.
PLoS One. 2024 Dec 31;19(12):e0316453. doi: 10.1371/journal.pone.0316453. eCollection 2024.
3. Parallel development of object recognition in newborn chicks and deep neural networks.
PLoS Comput Biol. 2024 Dec 2;20(12):e1012600. doi: 10.1371/journal.pcbi.1012600. eCollection 2024 Dec.
4. Brain-like Flexible Visual Inference by Harnessing Feedback-Feedforward Alignment.
Adv Neural Inf Process Syst. 2023 Dec;37:56979-56997. Epub 2024 May 30.
5. Gradient-free training of recurrent neural networks using random perturbations.
Front Neurosci. 2024 Jul 10;18:1439155. doi: 10.3389/fnins.2024.1439155. eCollection 2024.
6. Transition to chaos separates learning regimes and relates to measure of consciousness in recurrent neural networks.
bioRxiv. 2024 May 15:2024.05.15.594236. doi: 10.1101/2024.05.15.594236.
7. Recurrent neural networks that learn multi-step visual routines with reinforcement learning.
PLoS Comput Biol. 2024 Apr 29;20(4):e1012030. doi: 10.1371/journal.pcbi.1012030. eCollection 2024 Apr.
8. Design of oscillatory neural networks by machine learning.
Front Neurosci. 2024 Mar 4;18:1307525. doi: 10.3389/fnins.2024.1307525. eCollection 2024.
9. Brain-like Flexible Visual Inference by Harnessing Feedback-Feedforward Alignment.
ArXiv. 2023 Oct 31:arXiv:2310.20599v1.
10. Deep treasury management for banks.
Front Artif Intell. 2023 Mar 22;6:1120297. doi: 10.3389/frai.2023.1120297. eCollection 2023.