Interdisciplinary Graduate Program in Neuroscience, 356 Medical Research Center, University of Iowa, Iowa City, IA, 52242, United States.
Department of Psychological & Brain Sciences, W311 Seashore Hall, University of Iowa, Iowa City, IA, 52242, United States.
Brain Lang. 2020 Dec;211:104875. doi: 10.1016/j.bandl.2020.104875. Epub 2020 Oct 18.
Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N = 31) heard sentences in which we manipulated acoustic ambiguity (e.g., a bees/peas continuum) and sentential expectations (e.g., Honey is made by bees). EEG was analyzed with a mixed effects model over time to quantify how language processing cascades proceed on a millisecond-by-millisecond basis. Our results indicate: (1) perceptual processing and memory for fine-grained acoustics are preserved in brain activity for up to 900 msec; (2) contextual analysis begins early and is graded with respect to the acoustic signal; and (3) top-down predictions influence perceptual processing in some cases; however, these predictions are available simultaneously with the veridical signal. These mechanistic insights provide a basis for a better understanding of the cortical language network.