Detection and Attention for Auditory, Visual, and Audiovisual Speech in Children with Hearing Loss.

Author Information

Jerger Susan, Damian Markus F, Karl Cassandra, Abdi Hervé

Affiliations

School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas, USA.

Callier Center for Communication Disorders, University of Texas at Dallas, Richardson, Texas, USA.

Publication Information

Ear Hear. 2020 May/Jun;41(3):508-520. doi: 10.1097/AUD.0000000000000798.

Abstract

OBJECTIVES

Efficient multisensory speech detection is critical for children who must quickly detect/encode a rapid stream of speech to participate in conversations and have access to the audiovisual cues that underpin speech and language development, yet multisensory speech detection remains understudied in children with hearing loss (CHL). This research assessed detection, along with vigilant/goal-directed attention, for multisensory versus unisensory speech in CHL versus children with normal hearing (CNH).

DESIGN

Participants were 60 CHL who used hearing aids and communicated successfully aurally/orally and 60 age-matched CNH. Simple response times determined how quickly children could detect a preidentified easy-to-hear stimulus (70 dB SPL, utterance "buh" presented in auditory-only [A], visual-only [V], or audiovisual [AV] mode). The V mode comprised two facial conditions: static versus dynamic face. Faster detection for multisensory (AV) than unisensory (A or V) input indicates multisensory facilitation. We assessed mean responses as well as faster versus slower responses (defined by the first versus third quartiles of the response-time distributions): faster responses (first quartile) reflect efficient detection coupled with efficient vigilant/goal-directed attention, whereas slower responses (third quartile) reflect less efficient detection associated with attentional lapses. Finally, we studied associations between these results and personal characteristics of the CHL.
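The quartile-based analysis described above can be sketched in a few lines of code. This is a minimal illustration, not the authors' analysis pipeline: the response times below are hypothetical, and only the mode labels (A, V, AV) and the first/third-quartile split mirror the abstract's terminology.

```python
import statistics

# Hypothetical simple response times (ms) for one child, by presentation mode.
rt = {
    "A":  [420, 450, 480, 510, 540, 570, 600, 630],   # auditory only
    "V":  [500, 530, 560, 590, 620, 650, 680, 710],   # visual only
    "AV": [380, 410, 440, 470, 500, 530, 560, 590],   # audiovisual
}

def quartile_split(times):
    """Return the first- and third-quartile cutoffs of an RT distribution:
    responses below q1 are 'faster' (efficient detection/attention),
    responses above q3 are 'slower' (attentional lapses)."""
    q1, _, q3 = statistics.quantiles(times, n=4)
    return q1, q3

# Mean RT per mode, as in the abstract's analysis of mean responses.
means = {mode: statistics.mean(times) for mode, times in rt.items()}

# Multisensory facilitation: AV detection faster than either unisensory mode.
facilitation = means["AV"] < min(means["A"], means["V"])
```

With these illustrative data, the AV mean is lowest, so `facilitation` is true, matching the pattern the abstract reports for both groups.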

RESULTS

Unisensory A versus V modes: Both groups showed better detection and attention for A than V input. The A input more readily captured children's attention and minimized attentional lapses, which supports A-bound processing even by CHL who were processing low-fidelity A input. CNH and CHL did not differ in ability to detect A input at conversational speech level. Multisensory AV versus A modes: Both groups showed better detection and attention for AV than A input. The advantage for AV input was a facial effect (both static and dynamic faces), a pattern suggesting that communication is a social interaction that is more than just words. Attention did not differ between groups; detection was faster in CHL than CNH for AV input, but not for A input. Associations between personal characteristics/degree of hearing loss of CHL and results: CHL with the greatest deficits in detection of V input had the poorest word recognition skills, and CHL with the greatest reduction of attentional lapses from AV input had the poorest vocabulary skills. Both outcomes are consistent with the idea that CHL who are processing low-fidelity A input depend disproportionately on V and AV input to learn to identify words and associate them with concepts. As CHL aged, attention to V input improved. Degree of HL did not influence results.

CONCLUSIONS

Understanding speech, a daily challenge for CHL, is a complex task that demands efficient detection of and attention to AV speech cues. Our results support the clinical importance of multisensory approaches to understand and advance spoken communication by CHL.
