Zekveld Adriana A, Kramer Sophia E, Vlaming Marcel S M G, Houtgast Tammo
EMGO Institute, ENT/Audiology, VU University Medical Center, Amsterdam, The Netherlands.
Ear Hear. 2008 Jan;29(1):99-111. doi: 10.1097/AUD.0b013e31815d6d8d.
The aim of this study was to examine the support obtained from degraded visual information in the comprehension of speech in noise.
We presented sentences auditorily (speech reception threshold test), visually (text reception threshold test), and audiovisually. Presenting speech in noise and masked written text enabled the quantification and systematic variation of the amount of information presented in both modalities. Eighteen persons with normal hearing (aged 19 to 31 yr) participated. For half of them a bar pattern masked the text and for the other half random dots masked the text. The text was presented simultaneously or delayed relative to the speech. Using an adaptive procedure, the amount of information required for a correct reproduction of 50% of the sentences was determined for both the unimodal and the audiovisual stimuli. Bimodal support was defined as the difference between the observed bimodal performance and that predicted by an independent channels model. Nonparametric tests were used to evaluate the bimodal support and the effect of delaying the text.
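The bimodal-support computation described above can be sketched in code. Note the probability-summation form of the independent channels model used here, p_AV = 1 − (1 − p_A)(1 − p_V), is an assumption for illustration; the abstract does not state the exact formulation used in the study.

```python
# Sketch of bimodal support relative to an independent channels model.
# ASSUMPTION: the model takes the probability-summation form
#   p_AV = 1 - (1 - p_A)(1 - p_V);
# the study's exact model may differ.

def independent_channels_prediction(p_audio: float, p_visual: float) -> float:
    """Predicted bimodal proportion correct if the auditory and visual
    channels contribute independently (probability summation)."""
    return 1.0 - (1.0 - p_audio) * (1.0 - p_visual)

def bimodal_support(observed_av: float, p_audio: float, p_visual: float) -> float:
    """Bimodal support: observed bimodal score minus the model's prediction."""
    return observed_av - independent_channels_prediction(p_audio, p_visual)

# Example with hypothetical scores: 50% correct in each unimodal condition.
predicted = independent_channels_prediction(0.5, 0.5)  # 0.75
support = bimodal_support(0.90, 0.5, 0.5)              # ≈ 0.15, i.e. 15% correct
```

A positive `bimodal_support` value indicates that the observed audiovisual performance exceeds what independent processing of the two modalities would predict.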
Masked text substantially supported the comprehension of speech in noise; the bimodal support ranged from 15% to 25% correct. A negative effect of delaying the text was observed in some conditions for the participants who were presented the text masked by the bar pattern.
The ability of participants to reproduce bimodally presented sentences exceeded the performance predicted by an independent channels model. This indicates that a relatively small amount of visual information can substantially augment speech comprehension in noise, which supports the use of visual information to improve speech comprehension by listeners with hearing impairment, even when the visual information is incomplete.