The impact of when, what and how predictions on auditory speech perception.

Affiliations

Laboratoire Parole et Langage, UMR 7309, CNRS, LPL, Aix Marseille Université, 5 avenue Pasteur, 13100, Aix-en-Provence, France.

Département de Réadaptation, Faculté de Médecine, Université Laval, Quebec City, Canada.

Publication Information

Exp Brain Res. 2019 Dec;237(12):3143-3153. doi: 10.1007/s00221-019-05661-5. Epub 2019 Oct 1.

Abstract

An impressive number of theoretical proposals and neurobiological studies argue that perceptual processing is not strictly feedforward but rather operates through an interplay between bottom-up sensory and top-down predictive mechanisms. The present EEG study aimed to further determine how prior knowledge of auditory syllables may impact speech perception. Prior knowledge was manipulated by presenting the participants with visual information indicative of the syllable onset (when), its phonetic content (what) and/or its articulatory features (how). While when and what predictions consisted of unnatural visual cues (i.e., a visual timeline and a visuo-orthographic cue), how prediction consisted of the visual movements of a speaker. During auditory speech perception, when and what predictions both attenuated the amplitude of N1/P2 auditory evoked potentials. Regarding how prediction, not only an amplitude decrease but also a latency facilitation of N1/P2 auditory evoked potentials was observed during audiovisual compared to unimodal speech perception. However, when and what predictability effects were then reduced or abolished, with only what prediction reducing P2 amplitude but increasing its latency. Altogether, these results demonstrate the early influence of visually induced when, what and how predictions on cortical auditory speech processing. Crucially, they indicate a preponderant predictive role of the speaker's articulatory gestures during audiovisual speech perception, likely driven by attentional load and focus.
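
The N1/P2 effects reported above are peak amplitude and latency measures taken from averaged evoked responses. As a rough illustration of how such measures are typically computed (this is not the authors' analysis pipeline; the sampling rate, epoch limits and N1/P2 search windows below are assumed, typical values), here is a minimal NumPy sketch:

```python
import numpy as np

def n1_p2_measures(epochs, sfreq=500.0, tmin=-0.2,
                   n1_window=(0.07, 0.15), p2_window=(0.15, 0.30)):
    """Estimate N1/P2 peak amplitude and latency from epoched EEG.

    epochs : array, shape (n_trials, n_samples)
        Baseline-corrected single-channel epochs (e.g., a fronto-central
        electrode), time-locked to syllable onset.
    sfreq  : sampling frequency in Hz (assumed value).
    tmin   : epoch start relative to stimulus onset, in seconds.
    The N1/P2 search windows are typical values, not taken from the paper.
    """
    evoked = epochs.mean(axis=0)                   # average across trials
    times = tmin + np.arange(evoked.size) / sfreq  # time axis in seconds

    def peak(window, polarity):
        mask = (times >= window[0]) & (times <= window[1])
        seg = evoked[mask]
        idx = seg.argmin() if polarity == 'neg' else seg.argmax()
        return seg[idx], times[mask][idx]          # amplitude, latency

    n1_amp, n1_lat = peak(n1_window, 'neg')        # N1: negative deflection
    p2_amp, p2_lat = peak(p2_window, 'pos')        # P2: positive deflection
    return {'N1': (n1_amp, n1_lat), 'P2': (p2_amp, p2_lat)}

# Example with simulated data: 60 trials, 0.7 s epochs sampled at 500 Hz
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 1.0, size=(60, 350))
print(n1_p2_measures(epochs))
```

In a design like the one described, such measures would be computed per condition (when, what, how cues; audio-only versus audiovisual presentation) and then compared statistically across participants.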
