Where on the face do we look during phonemic restoration: An eye-tracking study.

Author Information

Baron Alisa, Harwood Vanessa, Kleinman Daniel, Campanelli Luca, Molski Joseph, Landi Nicole, Irwin Julia

Affiliations

Department of Communicative Disorders, University of Rhode Island, Kingston, RI, United States.

Haskins Laboratories, New Haven, CT, United States.

Publication Information

Front Psychol. 2023 May 25;14:1005186. doi: 10.3389/fpsyg.2023.1005186. eCollection 2023.

Abstract

Face-to-face communication typically involves audio and visual components of the speech signal. To examine the effect of task demands on gaze patterns in response to a speaking face, adults participated in two eye-tracking experiments with an audiovisual condition (articulatory information from the mouth was visible) and a pixelated condition (articulatory information was not visible). Task demands were further manipulated by having listeners respond in a passive (no response) or an active (button-press response) context. The active experiment required participants to discriminate between speech stimuli and was designed to mimic real-world listening situations that require a listener to use visual information to disambiguate the speaker's message. Stimuli included a clear exemplar of the syllable /ba/ and a second exemplar in which the formant-initial consonant was reduced, creating an //-like consonant. Consistent with our hypothesis, results revealed that fixations to the mouth were greatest in the audiovisual active experiment, and visual articulatory information led to a phonemic restoration effect for the // speech token. In the pixelated condition, participants fixated on the eyes, and discrimination of the deviant token within the active experiment was significantly greater than in the audiovisual condition. These results suggest that when required to disambiguate changes in speech, adults may look to the mouth for additional cues to support processing when such information is available.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e66b/10249372/21946efb8b4a/fpsyg-14-1005186-g001.jpg
