Suppr 超能文献



The neural architecture of language: Integrative modeling converges on predictive processing.

Affiliations

Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;

McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139.

Publication Information

Proc Natl Acad Sci U S A. 2021 Nov 9;118(45). doi: 10.1073/pnas.2105646118.

DOI:10.1073/pnas.2105646118
PMID:34737231
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8694052/
Abstract

The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models. By revealing trends across models, this approach yields novel insights into cognitive and neural mechanisms in the target domain. We here present a systematic study taking this approach to higher-level cognition: human language processing, our species' signature cognitive skill. We find that the most powerful "transformer" models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities (functional MRI and electrocorticography). Models' neural fits ("brain score") and fits to behavioral responses are both strongly correlated with model accuracy on the next-word prediction task (but not other language tasks). Model architecture appears to substantially contribute to neural fit. These results provide computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the human brain.
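The "brain score" mentioned in the abstract is, at its core, cross-validated linear predictivity: model activations for a set of sentences are regressed onto the recorded neural responses, and the score is the correlation between predicted and actual responses on held-out data. The sketch below illustrates that idea with synthetic data; the array shapes, the ridge regularization, and the `brain_score` helper are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Synthetic stand-ins: language-model activations (n_sentences x n_features)
# and neural responses (n_sentences x n_voxels). Real studies use fMRI/ECoG.
n_sentences, n_features, n_voxels = 200, 50, 10
activations = rng.standard_normal((n_sentences, n_features))
true_mapping = rng.standard_normal((n_features, n_voxels))
responses = activations @ true_mapping + 0.5 * rng.standard_normal((n_sentences, n_voxels))

def brain_score(X, Y, n_splits=5, alpha=1.0):
    """Cross-validated linear predictivity: fit a ridge regression from model
    activations X to neural responses Y on training folds, then return the mean
    Pearson correlation between predicted and actual held-out responses."""
    scores = []
    for train, test in KFold(n_splits=n_splits).split(X):
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        pred = model.predict(X[test])
        for v in range(Y.shape[1]):  # correlate per voxel, then average
            scores.append(np.corrcoef(pred[:, v], Y[test][:, v])[0, 1])
    return float(np.mean(scores))

score = brain_score(activations, responses)
print(f"brain score: {score:.3f}")
```

Because the synthetic responses are a noisy linear function of the activations, the score lands near the noise ceiling; in the paper, the analogous ceiling-normalized fit is what the strongest transformer models approach.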

Similar Articles

1
The neural architecture of language: Integrative modeling converges on predictive processing.
Proc Natl Acad Sci U S A. 2021 Nov 9;118(45). doi: 10.1073/pnas.2105646118.
2
Likelihood approximation networks (LANs) for fast inference of simulation models in cognitive neuroscience.
Elife. 2021 Apr 6;10:e65074. doi: 10.7554/eLife.65074.
3
Deep Artificial Neural Networks Reveal a Distributed Cortical Network Encoding Propositional Sentence-Level Meaning.
J Neurosci. 2021 May 5;41(18):4100-4119. doi: 10.1523/JNEUROSCI.1152-20.2021. Epub 2021 Mar 22.
4
Neural Encoding and Decoding With Distributed Sentence Representations.
IEEE Trans Neural Netw Learn Syst. 2021 Feb;32(2):589-603. doi: 10.1109/TNNLS.2020.3027595. Epub 2021 Feb 4.
5
Neural network processing of natural language: II. Towards a unified model of corticostriatal function in learning sentence comprehension and non-linguistic sequencing.
Brain Lang. 2009 May-Jun;109(2-3):80-92. doi: 10.1016/j.bandl.2008.08.002. Epub 2008 Oct 5.
6
Probabilistic language models in cognitive neuroscience: Promises and pitfalls.
Neurosci Biobehav Rev. 2017 Dec;83:579-588. doi: 10.1016/j.neubiorev.2017.09.001. Epub 2017 Sep 5.
7
Shared functional specialization in transformer-based language models and the human brain.
Nat Commun. 2024 Jun 29;15(1):5523. doi: 10.1038/s41467-024-49173-5.
8
Lack of selectivity for syntax relative to word meanings throughout the language network.
Cognition. 2020 Oct;203:104348. doi: 10.1016/j.cognition.2020.104348. Epub 2020 Jun 20.
9
Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence.
Neuron. 2020 Nov 11;108(3):413-423. doi: 10.1016/j.neuron.2020.07.040. Epub 2020 Sep 11.
10
Neural correlates of metaphor processing: the roles of figurativeness, familiarity and difficulty.
Brain Cogn. 2009 Dec;71(3):375-86. doi: 10.1016/j.bandc.2009.06.001. Epub 2009 Jul 7.

Cited By

1
Active use of latent tree-structured sentence representation in humans and large language models.
Nat Hum Behav. 2025 Sep 10. doi: 10.1038/s41562-025-02297-0.
2
Personalized Language Training and Bi-Hemispheric tDCS Improve Language Connectivity in Chronic Aphasia: A fMRI Case Study.
J Pers Med. 2025 Aug 3;15(8):352. doi: 10.3390/jpm15080352.
3
High-level visual representations in the human brain are aligned with large language models.
Nat Mach Intell. 2025;7(8):1220-1234. doi: 10.1038/s42256-025-01072-0. Epub 2025 Aug 7.
4
A systematic evaluation of Dutch large language models' surprisal estimates in sentence, paragraph and book reading.
Behav Res Methods. 2025 Aug 18;57(9):266. doi: 10.3758/s13428-025-02774-4.
5
Semantic composition in experimental and naturalistic paradigms.
Imaging Neurosci (Camb). 2024 Jan 22;2. doi: 10.1162/imag_a_00072. eCollection 2024.
6
From brain to education through machine learning: Predicting literacy and numeracy skills from neuroimaging data.
Imaging Neurosci (Camb). 2024 Jul 3;2. doi: 10.1162/imag_a_00219. eCollection 2024.
7
Recurrent neural networks as neuro-computational models of human speech recognition.
PLoS Comput Biol. 2025 Jul 28;21(7):e1013244. doi: 10.1371/journal.pcbi.1013244. eCollection 2025 Jul.
8
Computational Sentence-Level Metrics of Reading Speed and Its Ramifications for Sentence Comprehension.
Cogn Sci. 2025 Jul;49(7):e70092. doi: 10.1111/cogs.70092.
9
Emergence of a temporal processing gradient from naturalistic inputs and network connectivity.
Proc Natl Acad Sci U S A. 2025 Jul 15;122(28):e2420105122. doi: 10.1073/pnas.2420105122. Epub 2025 Jul 9.
10
The "Podcast" ECoG dataset for modeling neural activity during natural language comprehension.
Sci Data. 2025 Jul 3;12(1):1135. doi: 10.1038/s41597-025-05462-2.

References

1
Composition is the Core Driver of the Language-selective Network.
Neurobiol Lang (Camb). 2020 Mar 1;1(1):104-134. doi: 10.1162/nol_a_00005. eCollection 2020.
2
Incremental Language Comprehension Difficulty Predicts Activity in the Language Network but Not the Multiple Demand Network.
Cereb Cortex. 2021 Jul 29;31(9):4006-4023. doi: 10.1093/cercor/bhab065.
3
What Limits Our Capacity to Process Nested Long-Range Dependencies in Sentence Comprehension?
Entropy (Basel). 2020 Apr 16;22(4):446. doi: 10.3390/e22040446.
4
Controversial stimuli: Pitting neural networks against each other as models of human cognition.
Proc Natl Acad Sci U S A. 2020 Nov 24;117(47):29330-29337. doi: 10.1073/pnas.1912334117.
5
Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence.
Neuron. 2020 Nov 11;108(3):413-423. doi: 10.1016/j.neuron.2020.07.040. Epub 2020 Sep 11.
6
A map of object space in primate inferotemporal cortex.
Nature. 2020 Jul;583(7814):103-108. doi: 10.1038/s41586-020-2350-5. Epub 2020 Jun 3.
7
Lossy-Context Surprisal: An Information-Theoretic Model of Memory Effects in Sentence Processing.
Cogn Sci. 2020 Mar;44(3):e12814. doi: 10.1111/cogs.12814.
8
Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks.
Neuron. 2020 Feb 5;105(3):416-434. doi: 10.1016/j.neuron.2019.12.002.
9
fMRI reveals language-specific predictive coding during naturalistic sentence comprehension.
Neuropsychologia. 2020 Feb 17;138:107307. doi: 10.1016/j.neuropsychologia.2019.107307. Epub 2019 Dec 24.
10
The neurobiology of language beyond single-word processing.
Science. 2019 Oct 4;366(6461):55-58. doi: 10.1126/science.aax0289.