

Toward an Integration of Deep Learning and Neuroscience.

Authors

Marblestone Adam H, Wayne Greg, Kording Konrad P

Affiliations

Synthetic Neurobiology Group, Massachusetts Institute of Technology, Media Lab, Cambridge, MA, USA.

Google DeepMind, London, UK.

Publication Info

Front Comput Neurosci. 2016 Sep 14;10:94. doi: 10.3389/fncom.2016.00094. eCollection 2016.

DOI: 10.3389/fncom.2016.00094
PMID: 27683554
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5021692/
Abstract

Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
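The abstract argues that several implementations of credit assignment through multiple layers of neurons are compatible with known circuitry. As a concrete illustration (not code from the paper), the sketch below trains a toy two-layer network with feedback alignment, a biologically motivated credit-assignment rule in which the backward pass routes the output error through a fixed random matrix `B` instead of the transpose of the forward weights (cf. the random-feedback-weights result by Lillicrap et al. in the reference list). The network sizes, data, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn the linear map y = x @ T (sizes are arbitrary).
n_in, n_hid, n_out = 4, 16, 2
T = rng.normal(size=(n_in, n_out))
X = rng.normal(size=(256, n_in))
Y = X @ T

W1 = rng.normal(scale=0.5, size=(n_in, n_hid))   # forward weights, layer 1
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))  # forward weights, layer 2
B = rng.normal(scale=0.5, size=(n_out, n_hid))   # FIXED random feedback path

lr = 0.02
losses = []
for _ in range(500):
    # Forward pass through a ReLU hidden layer.
    h = np.maximum(X @ W1, 0.0)
    y_hat = h @ W2
    e = y_hat - Y                        # output error
    losses.append(float((e ** 2).mean()))
    # Credit assignment: the error reaches the hidden layer through B,
    # not through W2.T as exact backpropagation would require.
    dh = (e @ B) * (h > 0)
    W2 -= lr * (h.T @ e) / len(X)
    W1 -= lr * (X.T @ dh) / len(X)

print(f"mse: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Replacing `B` with `W2.T` in the `dh` line would recover exact backpropagation; the point of the sketch is that even a fixed random feedback path delivers a usable error signal to the hidden layer, which is one way such credit assignment could avoid the biologically implausible requirement of symmetric forward and backward weights.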


Similar Articles

1. Toward an Integration of Deep Learning and Neuroscience.
Front Comput Neurosci. 2016 Sep 14;10:94. doi: 10.3389/fncom.2016.00094. eCollection 2016.
2. Application of Meta-Heuristic Algorithms for Training Neural Networks and Deep Learning Architectures: A Comprehensive Review.
Neural Process Lett. 2022 Oct 31:1-104. doi: 10.1007/s11063-022-11055-6.
3. Training deep neural density estimators to identify mechanistic models of neural dynamics.
Elife. 2020 Sep 17;9:e56261. doi: 10.7554/eLife.56261.
4. RatInABox, a toolkit for modelling locomotion and neuronal activity in continuous environments.
Elife. 2024 Feb 9;13:e85274. doi: 10.7554/eLife.85274.
5. A deep learning framework for neuroscience.
Nat Neurosci. 2019 Nov;22(11):1761-1770. doi: 10.1038/s41593-019-0520-2. Epub 2019 Oct 28.
6. If deep learning is the answer, what is the question?
Nat Rev Neurosci. 2021 Jan;22(1):55-67. doi: 10.1038/s41583-020-00395-8. Epub 2020 Nov 16.
7. Towards deep learning with segregated dendrites.
Elife. 2017 Dec 5;6:e22901. doi: 10.7554/eLife.22901.
8. Learning dynamics of deep linear networks with multiple pathways.
Adv Neural Inf Process Syst. 2022 Dec;35:34064-34076.
9. MABAL: a Novel Deep-Learning Architecture for Machine-Assisted Bone Age Labeling.
J Digit Imaging. 2018 Aug;31(4):513-519. doi: 10.1007/s10278-018-0053-3.
10. Natural and Artificial Intelligence: A brief introduction to the interplay between AI and neuroscience research.
Neural Netw. 2021 Dec;144:603-613. doi: 10.1016/j.neunet.2021.09.018. Epub 2021 Sep 28.

Cited By

1. NeuroQ: Quantum-Inspired Brain Emulation.
Biomimetics (Basel). 2025 Aug 7;10(8):516. doi: 10.3390/biomimetics10080516.
2. High-level visual representations in the human brain are aligned with large language models.
Nat Mach Intell. 2025;7(8):1220-1234. doi: 10.1038/s42256-025-01072-0. Epub 2025 Aug 7.
3. Connectome analysis of a cerebellum-like circuit for sensory prediction.
bioRxiv. 2025 Jul 3:2025.07.03.662989. doi: 10.1101/2025.07.03.662989.
4. Irrationality in humans and creativity in AI.
Front Artif Intell. 2025 Jun 20;8:1579704. doi: 10.3389/frai.2025.1579704. eCollection 2025.
5. A normative principle governing memory transfer in cerebellar motor learning.
Nat Commun. 2025 Jul 1;16(1):5479. doi: 10.1038/s41467-025-60511-z.
6. STIED: a deep learning model for the spatiotemporal detection of focal interictal epileptiform discharges with MEG.
Sci Rep. 2025 Jul 1;15(1):21017. doi: 10.1038/s41598-025-03880-1.
7. SpyDen: simplifying molecular and structural analysis across spines and dendrites.
Bioinformatics. 2025 Jul 1;41(7). doi: 10.1093/bioinformatics/btaf339.
8. Constructing biologically constrained RNNs via Dale's backprop and topologically-informed pruning.
bioRxiv. 2025 Jan 13:2025.01.09.632231. doi: 10.1101/2025.01.09.632231.
9. Assessment of human emotional reactions to visual stimuli "deep-dreamed" by artificial neural networks.
Front Psychol. 2024 Dec 24;15:1509392. doi: 10.3389/fpsyg.2024.1509392. eCollection 2024.
10. Exploring neural architectures for simultaneously recognizing multiple visual attributes.
Sci Rep. 2024 Dec 3;14(1):30036. doi: 10.1038/s41598-024-80679-6.

References

1. Learning Invariance from Transformation Sequences.
Neural Comput. 1991 Summer;3(2):194-200. doi: 10.1162/neco.1991.3.2.194.
2. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.
Nat Commun. 2017 Jul 26;8(1):138. doi: 10.1038/s41467-017-00181-8.
3. Could a Neuroscientist Understand a Microprocessor?
PLoS Comput Biol. 2017 Jan 12;13(1):e1005268. doi: 10.1371/journal.pcbi.1005268. eCollection 2017 Jan.
4. Building machines that learn and think like people.
Behav Brain Sci. 2017 Jan;40:e253. doi: 10.1017/S0140525X16001837. Epub 2016 Nov 24.
5. Random synaptic feedback weights support error backpropagation for deep learning.
Nat Commun. 2016 Nov 8;7:13276. doi: 10.1038/ncomms13276.
6. Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation.
PLoS Comput Biol. 2016 Sep 30;12(9):e1005070. doi: 10.1371/journal.pcbi.1005070. eCollection 2016 Sep.
7. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.
Neural Comput. 2016 Nov;28(11):2474-2504. doi: 10.1162/NECO_a_00893. Epub 2016 Sep 14.
8. The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.
Trends Cogn Sci. 2016 Aug;20(8):589-604. doi: 10.1016/j.tics.2016.05.011. Epub 2016 Jul 4.
9. Prospective Coding by Spiking Neurons.
PLoS Comput Biol. 2016 Jun 24;12(6):e1005003. doi: 10.1371/journal.pcbi.1005003. eCollection 2016 Jun.
10. What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated.
Trends Cogn Sci. 2016 Jul;20(7):512-534. doi: 10.1016/j.tics.2016.05.004.