Department of Psychology, University of Potsdam, Potsdam, Germany.
Eur J Neurosci. 2021 Nov;54(9):7125-7140. doi: 10.1111/ejn.15462. Epub 2021 Oct 4.
The functional significance of the N400 evoked-response component is still actively debated. A growing body of theoretical and computational modelling work builds on the interpretation of the N400 as a prediction error. In neural network modelling work, it was proposed that the N400 component can be interpreted as the change in a probabilistic representation of meaning that drives the continuous adaptation of an internal model of the statistics of the environment. This account implies that larger N400 amplitudes should correspond to greater adaptation, which can be measured via implicit memory. To investigate this model-derived hypothesis, the current study manipulated expectancy in a sentence reading task to influence N400 amplitudes and subsequently presented the previously expected vs. unexpected words in a perceptual identification task to measure implicit memory. As predicted, reaction times in the perceptual identification task were significantly faster for previously unexpected words, which had induced larger N400 amplitudes in the preceding sentence reading task. Moreover, this adaptation appears to depend specifically on the process underlying the N400: participants with larger N400 differences during sentence reading also showed a larger implicit memory benefit in the perceptual identification task. These findings support the interpretation of the N400 as an implicit learning signal driving adaptation in language processing.
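The analysis logic described in the abstract can be illustrated with a minimal sketch: a within-subject test of the implicit memory benefit and a between-subject correlation between the N400 effect and that benefit. This is not the authors' pipeline; the data are simulated and all variable names (e.g., n400_diff, rt_benefit) are hypothetical, chosen only to mirror the two predictions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-participant data (hypothetical values, for illustration only).
n_participants = 40
# N400 difference: mean amplitude (unexpected - expected) during sentence reading,
# in microvolts; more negative values indicate a larger N400 effect.
n400_diff = rng.normal(loc=-2.0, scale=1.0, size=n_participants)
# Implicit memory benefit: RT(previously expected) - RT(previously unexpected)
# in the perceptual identification task, in ms; positive values mean previously
# unexpected words were identified faster.
rt_benefit = -15.0 * n400_diff + rng.normal(scale=20.0, size=n_participants)

# Prediction 1 (within-subject): previously unexpected words are identified
# faster, i.e., the mean RT benefit differs from zero.
t_within, p_within = stats.ttest_1samp(rt_benefit, popmean=0.0)

# Prediction 2 (between-subject): participants with larger N400 differences show
# a larger implicit memory benefit (a negative correlation, because a larger
# N400 effect corresponds to a more negative amplitude difference).
r_between, p_between = stats.pearsonr(n400_diff, rt_benefit)

print(f"RT benefit vs. 0: t = {t_within:.2f}, p = {p_within:.3g}")
print(f"N400 difference vs. RT benefit: r = {r_between:.2f}, p = {p_between:.3g}")
```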