On the Mathematical Relationship Between Contextual Probability and N400 Amplitude.

Authors

Michaelov James A, Bergen Benjamin K

Affiliation

Department of Cognitive Science, University of California San Diego.

Publication

Open Mind (Camb). 2024 Jun 28;8:859-897. doi: 10.1162/opmi_a_00150. eCollection 2024.

Abstract

Accounts of human language comprehension propose different mathematical relationships between the contextual probability of a word and how difficult it is to process, including linear, logarithmic, and super-logarithmic ones. However, the empirical evidence favoring any of these over the others is mixed, appearing to vary depending on the index of processing difficulty used and the approach taken to calculate contextual probability. To help disentangle these results, we focus on the mathematical relationship between corpus-derived contextual probability and the N400, a neural index of processing difficulty. Specifically, we use 37 contemporary transformer language models to calculate the contextual probability of stimuli from 6 experimental studies of the N400, and test whether N400 amplitude is best predicted by a linear, logarithmic, super-logarithmic, or sub-logarithmic transformation of the probabilities calculated using these language models, as well as combinations of these transformed metrics. We replicate the finding that on some datasets, a combination of linearly and logarithmically-transformed probability can predict N400 amplitude better than either metric alone. In addition, we find that overall, the best single predictor of N400 amplitude is sub-logarithmically-transformed probability, which for almost all language models and datasets explains all the variance in N400 amplitude otherwise explained by the linear and logarithmic transformations. This is a novel finding that is not predicted by any current theoretical accounts, and thus one that we argue is likely to play an important role in increasing our understanding of how the statistical regularities of language impact language comprehension.
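The comparison described in the abstract can be illustrated with a minimal sketch. This is not the authors' analysis code: the probabilities below are simulated rather than derived from a transformer language model, the N400 amplitudes are synthetic, and the sub- and super-logarithmic transforms are modeled here as assumed power-law functions of surprisal (one possible operationalization among several). The sketch only shows the general logic of regressing an index of processing difficulty on differently transformed probabilities and comparing variance explained.

```python
# Illustrative sketch (not the authors' code): comparing transformations of
# contextual probability as predictors of N400 amplitude via least squares.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-word contextual probabilities (a real study would obtain
# these from a language model given each word's preceding context).
p = rng.uniform(0.001, 0.9, size=200)

surprisal = -np.log(p)        # logarithmic transform (surprisal)
sublog = surprisal ** 0.5     # sub-logarithmic transform (assumed form)
superlog = surprisal ** 1.5   # super-logarithmic transform (assumed form)

# Synthetic N400 amplitudes, generated here from the sub-log predictor
# plus noise purely for demonstration purposes.
n400 = -1.0 * sublog + rng.normal(0, 0.3, size=200)

def r_squared(x, y):
    """R^2 of a simple linear regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

for name, x in [("linear (p)", p), ("log (surprisal)", surprisal),
                ("sub-log", sublog), ("super-log", superlog)]:
    print(f"{name:16s} R^2 = {r_squared(x, n400):.3f}")
```

In this toy setup, the sub-logarithmic predictor explains the most variance because the data were generated from it; the substantive question in the paper is which transform plays that role for real N400 data.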


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d795/11285424/e8b9d9556d6b/opmi-08-859-g001.jpg

Similar Articles

2
Strong Prediction: Language Model Surprisal Explains Multiple N400 Effects.
Neurobiol Lang (Camb). 2024 Apr 1;5(1):107-135. doi: 10.1162/nol_a_00105. eCollection 2024.
4
Tracking Lexical and Semantic Prediction Error Underlying the N400 Using Artificial Neural Network Models of Sentence Processing.
Neurobiol Lang (Camb). 2024 Apr 1;5(1):136-166. doi: 10.1162/nol_a_00134. eCollection 2024.
5
Neural evidence for Bayesian trial-by-trial adaptation on the N400 during semantic priming.
Cognition. 2019 Jun;187:10-20. doi: 10.1016/j.cognition.2019.01.001. Epub 2019 Feb 20.
6
Ignoring the alternatives: The N400 is sensitive to stimulus preactivation alone.
Cortex. 2023 Nov;168:82-101. doi: 10.1016/j.cortex.2023.08.001. Epub 2023 Aug 14.
7
Neurobehavioral Correlates of Surprisal in Language Comprehension: A Neurocomputational Model.
Front Psychol. 2021 Feb 11;12:615538. doi: 10.3389/fpsyg.2021.615538. eCollection 2021.
9
A Neurocomputational Model of the N400 and the P600 in Language Processing.
Cogn Sci. 2017 May;41 Suppl 6(Suppl Suppl 6):1318-1352. doi: 10.1111/cogs.12461. Epub 2016 Dec 21.
10
Large-scale evidence for logarithmic effects of word predictability on reading time.
Proc Natl Acad Sci U S A. 2024 Mar 5;121(10):e2307876121. doi: 10.1073/pnas.2307876121. Epub 2024 Feb 29.

Cited By

References

1
Strong Prediction: Language Model Surprisal Explains Multiple N400 Effects.
Neurobiol Lang (Camb). 2024 Apr 1;5(1):107-135. doi: 10.1162/nol_a_00105. eCollection 2024.
2
Large-scale evidence for logarithmic effects of word predictability on reading time.
Proc Natl Acad Sci U S A. 2024 Mar 5;121(10):e2307876121. doi: 10.1073/pnas.2307876121. Epub 2024 Feb 29.
3
The Plausibility of Sampling as an Algorithmic Theory of Sentence Processing.
Open Mind (Camb). 2023 Jul 21;7:350-391. doi: 10.1162/opmi_a_00086. eCollection 2023.
5
Comprehending surprising sentences: sensitivity of post-N400 positivities to contextual congruity and semantic relatedness.
Lang Cogn Neurosci. 2020;35(8):1044-1063. doi: 10.1080/23273798.2019.1708960. Epub 2020 Jan 6.
6
Comparison of Structural Parsers and Neural Language Models as Surprisal Estimators.
Front Artif Intell. 2022 Mar 3;5:777963. doi: 10.3389/frai.2022.777963. eCollection 2022.
7
The power of "good": Can adjectives rapidly decrease as well as increase the availability of the upcoming noun?
J Exp Psychol Learn Mem Cogn. 2022 Jun;48(6):856-875. doi: 10.1037/xlm0001091. Epub 2021 Oct 28.
8
Retrieval (N400) and integration (P600) in expectation-based comprehension.
PLoS One. 2021 Sep 28;16(9):e0257430. doi: 10.1371/journal.pone.0257430. eCollection 2021.
9
Connecting and considering: Electrophysiology provides insights into comprehension.
Psychophysiology. 2022 Jan;59(1):e13940. doi: 10.1111/psyp.13940. Epub 2021 Sep 14.
10
Neurobehavioral Correlates of Surprisal in Language Comprehension: A Neurocomputational Model.
Front Psychol. 2021 Feb 11;12:615538. doi: 10.3389/fpsyg.2021.615538. eCollection 2021.
