
Deep Artificial Neural Networks Reveal a Distributed Cortical Network Encoding Propositional Sentence-Level Meaning.

Affiliations

Department of Neuroscience, University of Rochester, Rochester, New York 14642.

Del Monte Institute for Neuroscience, University of Rochester, Rochester, New York 14642.

Publication Information

J Neurosci. 2021 May 5;41(18):4100-4119. doi: 10.1523/JNEUROSCI.1152-20.2021. Epub 2021 Mar 22.

DOI: 10.1523/JNEUROSCI.1152-20.2021
PMID: 33753548
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8176751/
Abstract

Understanding how and where in the brain sentence-level meaning is constructed from words presents a major scientific challenge. Recent advances have begun to explain brain activation elicited by sentences using vector models of word meaning derived from patterns of word co-occurrence in text corpora. These studies have helped map out semantic representation across a distributed brain network spanning temporal, parietal, and frontal cortex. However, it remains unclear whether activation patterns within regions reflect unified representations of sentence-level meaning, as opposed to superpositions of context-independent component words. This is because models have typically represented sentences as "bags-of-words" that neglect sentence-level structure. To address this issue, we interrogated fMRI activation elicited as 240 sentences were read by 14 participants (9 female, 5 male), using sentences encoded by a recurrent deep artificial neural-network trained on a sentence inference task (InferSent). Recurrent connections and nonlinear filters enable InferSent to transform sequences of word vectors into unified "propositional" sentence representations suitable for evaluating intersentence entailment relations. Using voxelwise encoding modeling, we demonstrate that InferSent predicts elements of fMRI activation that cannot be predicted by bag-of-words models and sentence models using grammatical rules to assemble word vectors. This effect occurs throughout a distributed network, which suggests that propositional sentence-level meaning is represented within and across multiple cortical regions rather than at any single site. In follow-up analyses, we place results in the context of other deep network approaches (ELMo and BERT) and estimate the degree of unpredicted neural signal using an "experiential" semantic model and cross-participant encoding. 
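The contrast the abstract draws between "bag-of-words" models and recurrent encoders like InferSent comes down to word order: averaging word vectors discards sequence structure, while a recurrent, nonlinear combination preserves it. A toy numpy sketch (word vectors and weights are random stand-ins, not the paper's trained models) illustrates the difference:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical word vectors; real models derive these from co-occurrence
# statistics in large text corpora.
vocab = {w: rng.normal(size=8) for w in ["man", "bites", "dog"]}

def bag_of_words(sentence):
    """Order-insensitive: the mean of the component word vectors."""
    return np.mean([vocab[w] for w in sentence.split()], axis=0)

# A minimal recurrent encoder: each step nonlinearly mixes the running
# state with the next word vector, so word order changes the result.
W_h = rng.normal(size=(8, 8)) * 0.5
W_x = rng.normal(size=(8, 8)) * 0.5

def recurrent(sentence):
    h = np.zeros(8)
    for w in sentence.split():
        h = np.tanh(W_h @ h + W_x @ vocab[w])
    return h

a, b = "man bites dog", "dog bites man"
print(np.allclose(bag_of_words(a), bag_of_words(b)))  # True: BoW collapses order
print(np.allclose(recurrent(a), recurrent(b)))        # False: order matters
```

Because the two sentences contain the same words, any purely additive model assigns them identical representations even though they assert different propositions; this is exactly the confound the study's sequence-sensitive encoder is meant to remove.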
Significance Statement

A modern-day scientific challenge is to understand how the human brain transforms word sequences into representations of sentence meaning. A recent approach, emerging from advances in functional neuroimaging, big data, and machine learning, is to computationally model meaning, and use models to predict brain activity. Such models have helped map a cortical semantic information-processing network. However, how unified sentence-level information, as opposed to word-level units, is represented throughout this network remains unclear. This is because models have typically represented sentences as unordered "bags-of-words." Using a deep artificial neural network that recurrently and nonlinearly combines word representations into unified propositional sentence representations, we provide evidence that sentence-level information is encoded throughout a cortical network, rather than in a single region.
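The voxelwise encoding modeling described above can be sketched as regularized regression from sentence embeddings to per-voxel responses, scored by held-out prediction accuracy. The following is a simulation only (embeddings, voxel responses, and the single train/test split are all fabricated stand-ins for the paper's fMRI data and cross-validation), but it shows why a sequence-sensitive model can predict signal a bag-of-words model cannot:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 240 sentences (as in the study), simulated voxels.
n_sentences, n_voxels, dim = 240, 50, 32

# Simulated embeddings: the "sequence" model carries extra features
# (standing in for order/structure information) beyond the BoW features.
X_bow = rng.normal(size=(n_sentences, dim))
X_seq = np.hstack([X_bow, rng.normal(size=(n_sentences, dim))])

# Simulated voxel responses depend partly on the sequence-specific
# features, so the BoW model cannot explain all predictable variance.
W = rng.normal(size=(X_seq.shape[1], n_voxels))
Y = X_seq @ W + rng.normal(scale=0.5, size=(n_sentences, n_voxels))

def ridge_fit_predict(X_train, Y_train, X_test, alpha=1.0):
    """Closed-form ridge regression, one weight vector per voxel."""
    d = X_train.shape[1]
    B = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(d),
                        X_train.T @ Y_train)
    return X_test @ B

train, test = np.arange(180), np.arange(180, 240)

def voxelwise_r(X):
    """Pearson r between predicted and observed response, per voxel."""
    pred = ridge_fit_predict(X[train], Y[train], X[test])
    p = (pred - pred.mean(0)) / pred.std(0)
    o = (Y[test] - Y[test].mean(0)) / Y[test].std(0)
    return (p * o).mean(0)

r_bow = voxelwise_r(X_bow)
r_seq = voxelwise_r(X_seq)
print(f"mean voxel r  BoW: {r_bow.mean():.2f}  seq: {r_seq.mean():.2f}")
```

In the simulation the sequence model's held-out correlations exceed the bag-of-words model's; the study's analogous comparison, run per voxel across cortex, is what localizes propositional sentence-level information to the distributed network.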

