Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, Massachusetts 02129,
Department of Psychology, Tufts University, Medford, Massachusetts 02155.
J Neurosci. 2020 Apr 15;40(16):3278-3291. doi: 10.1523/JNEUROSCI.1733-19.2020. Epub 2020 Mar 11.
It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with representational similarity analysis, to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence that the similarity between spatial patterns of neural activity following animate-constraining verbs was greater than following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.

Language inputs unfold very quickly during real-time communication. By predicting ahead, we can give our brains a "head start," so that language comprehension is faster and more efficient. Although most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information.
For example, following the context of "they cautioned the…," we can predict that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here, we used EEG and MEG techniques to show that the brain is able to use these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.
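The core measure described above — comparing the similarity between spatial patterns of neural activity across trials — can be illustrated with a minimal sketch. The data below are simulated, not real MEG/EEG recordings, and the function name `mean_pattern_similarity` is a hypothetical stand-in for the study's representational similarity analysis: it simply averages the pairwise Pearson correlations between trial-wise spatial patterns, so a condition whose predicted items share more semantic structure should yield a higher average similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: spatial activity patterns at one post-verb time point,
# shape (n_trials, n_sensors). A shared component is injected into the
# "animate" condition to mimic greater across-trial pattern similarity.
n_trials, n_sensors = 40, 64
shared = rng.normal(size=n_sensors)
animate = rng.normal(size=(n_trials, n_sensors)) + shared
inanimate = rng.normal(size=(n_trials, n_sensors))

def mean_pattern_similarity(patterns):
    """Average pairwise Pearson correlation between trial-wise spatial patterns."""
    corr = np.corrcoef(patterns)           # (n_trials, n_trials) correlation matrix
    iu = np.triu_indices_from(corr, k=1)   # unique trial pairs, excluding the diagonal
    return corr[iu].mean()

sim_animate = mean_pattern_similarity(animate)
sim_inanimate = mean_pattern_similarity(inanimate)
print(sim_animate > sim_inanimate)  # the predicted direction of the effect
```

In the actual studies this comparison would be computed at each time point between verb and noun onset; the sketch collapses that to a single time point to keep the logic visible.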