
The neural architecture of language: Integrative modeling converges on predictive processing.

Author Affiliations

Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;

McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139.

Publication Information

Proc Natl Acad Sci U S A. 2021 Nov 9;118(45). doi: 10.1073/pnas.2105646118.

Abstract

The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models. By revealing trends across models, this approach yields novel insights into cognitive and neural mechanisms in the target domain. We here present a systematic study taking this approach to higher-level cognition: human language processing, our species' signature cognitive skill. We find that the most powerful "transformer" models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities (functional MRI and electrocorticography). Models' neural fits ("brain score") and fits to behavioral responses are both strongly correlated with model accuracy on the next-word prediction task (but not other language tasks). Model architecture appears to substantially contribute to neural fit. These results provide computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the human brain.
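The "brain score" described above can be illustrated with a minimal sketch: cross-validated ridge regression maps a model's sentence activations onto recorded neural responses, and the score is the correlation between predicted and held-out responses. The function name, the single ridge penalty, and the scoring details here are illustrative assumptions, not the authors' exact pipeline (which additionally normalizes by an estimated noise ceiling to report explainable variance).

```python
import numpy as np

def brain_score(model_activations, neural_responses, alpha=1.0, n_folds=5):
    """Illustrative brain-score sketch (not the published pipeline).

    model_activations: (n_sentences, n_features) array of model representations.
    neural_responses:  (n_sentences, n_recording_sites) array (e.g. fMRI voxels
                       or ECoG electrodes).
    Returns the mean Pearson correlation between ridge-predicted and held-out
    neural responses, averaged over recording sites and cross-validation folds.
    """
    n = model_activations.shape[0]
    folds = np.array_split(np.arange(n), n_folds)
    fold_scores = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        X_tr, X_te = model_activations[train_idx], model_activations[test_idx]
        Y_tr, Y_te = neural_responses[train_idx], neural_responses[test_idx]
        # Ridge solution: W = (X'X + alpha * I)^-1 X'Y
        d = X_tr.shape[1]
        W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ Y_tr)
        Y_hat = X_te @ W
        # Pearson correlation per recording site on the held-out split
        yh = Y_hat - Y_hat.mean(axis=0)
        yt = Y_te - Y_te.mean(axis=0)
        r = (yh * yt).sum(axis=0) / (
            np.linalg.norm(yh, axis=0) * np.linalg.norm(yt, axis=0) + 1e-12
        )
        fold_scores.append(r.mean())
    return float(np.mean(fold_scores))
```

Under this sketch, a model whose activations linearly predict the neural data well yields a score near 1, while unrelated activations yield a score near 0; the paper's headline result is that next-word-prediction accuracy tracks this kind of fit across models.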


Similar Articles

Neural Encoding and Decoding With Distributed Sentence Representations.
IEEE Trans Neural Netw Learn Syst. 2021 Feb;32(2):589-603. doi: 10.1109/TNNLS.2020.3027595. Epub 2021 Feb 4.

Probabilistic language models in cognitive neuroscience: Promises and pitfalls.
Neurosci Biobehav Rev. 2017 Dec;83:579-588. doi: 10.1016/j.neubiorev.2017.09.001. Epub 2017 Sep 5.

Cited By

Semantic composition in experimental and naturalistic paradigms.
Imaging Neurosci (Camb). 2024 Jan 22;2. doi: 10.1162/imag_a_00072. eCollection 2024.

Recurrent neural networks as neuro-computational models of human speech recognition.
PLoS Comput Biol. 2025 Jul 28;21(7):e1013244. doi: 10.1371/journal.pcbi.1013244. eCollection 2025 Jul.

References Cited in This Article

Composition is the Core Driver of the Language-selective Network.
Neurobiol Lang (Camb). 2020 Mar 1;1(1):104-134. doi: 10.1162/nol_a_00005. eCollection 2020.

A map of object space in primate inferotemporal cortex.
Nature. 2020 Jul;583(7814):103-108. doi: 10.1038/s41586-020-2350-5. Epub 2020 Jun 3.
