
How is the McGurk effect modulated by Cued Speech in deaf and hearing adults?

Affiliations

Center for Research in Cognition and Neurosciences, Université Libre de Bruxelles, Brussels, Belgium.

Publication Information

Front Psychol. 2014 May 19;5:416. doi: 10.3389/fpsyg.2014.00416. eCollection 2014.

Abstract

Speech perception for both hearing and deaf people involves an integrative process between auditory and lip-reading information. To disambiguate the information from the lips, manual cues from Cued Speech may be added. Cued Speech (CS) is a system of manual aids developed to help deaf people understand speech clearly and completely through vision (Cornett, 1967). Within this system, both labial and manual information, as lone input sources, remain ambiguous. Perceivers therefore have to combine both types of information in order to form one coherent percept. In this study, we examined how audio-visual (AV) integration is affected by the presence of manual cues and on which form of information (auditory, labial, or manual) CS receivers primarily rely. To address this issue, we designed an experiment using AV McGurk stimuli (audio /pa/ and lip-read /ka/) produced with or without manual cues. The manual cue was congruent with either the auditory information, the lip information, or the expected fusion. Participants were asked to repeat the perceived syllable aloud. Their responses were then classified into four categories: audio (response /pa/), lip-reading (response /ka/), fusion (response /ta/), and other (any response other than /pa/, /ka/, or /ta/). Data were collected from hearing-impaired individuals who were experts in CS (all of whom had either cochlear implants or binaural hearing aids; N = 8), hearing individuals who were experts in CS (N = 14), and hearing individuals who were completely naïve to CS (N = 15). Results confirmed that, like hearing people, deaf people can merge auditory and lip-reading information into a single unified percept. Without manual cues, McGurk stimuli induced the same percentage of fusion responses in both groups. Results also suggest that manual cues can modify AV integration and that their impact differs between hearing and deaf people.
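As a concrete illustration of the four-way response coding described in the abstract, here is a minimal sketch in Python. The function name, the string normalization, and the dictionary layout are assumptions made for illustration; the study itself coded participants' spoken repetitions, not text strings.

```python
# Minimal sketch of the response-coding rule described in the abstract.
# Assumed: responses arrive as transcribed syllable strings such as "/pa/".

def code_response(response: str) -> str:
    """Map a transcribed syllable to one of the four response categories."""
    syllable = response.strip().lower().strip("/")  # e.g. "/Pa/" -> "pa"
    categories = {
        "pa": "audio",        # matches the auditory input /pa/
        "ka": "lip-reading",  # matches the lip-read input /ka/
        "ta": "fusion",       # the classic McGurk fusion percept
    }
    return categories.get(syllable, "other")

if __name__ == "__main__":
    for r in ["/pa/", "/ka/", "/ta/", "/ba/"]:
        print(r, "->", code_response(r))
```

Under this scheme, any response outside the three expected syllables (such as /ba/ above) falls into the "other" category, matching the classification rule stated in the abstract.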

