

Audiovisual Integration in Cued Speech Perception: Impact on Speech Recognition in Quiet and Noise Among Adults With Hearing Loss and Those With Typical Hearing.

Author information

Caron Cora Jirschik, Vilain Coriandre, Schwartz Jean-Luc, Leybaert Jacqueline, Colin Cécile

Affiliations

CRCN, Université libre de Bruxelles, Belgium.

GIPSA Laboratory, University of Grenoble Alpes, CNRS, Grenoble INP, France.

Publication information

J Speech Lang Hear Res. 2025 Aug 12;68(8):4158-4176. doi: 10.1044/2025_JSLHR-24-00334. Epub 2025 Jul 21.

Abstract

PURPOSE

This study aimed to investigate audiovisual (AV) integration of cued speech (CS) gestures with auditory input presented in quiet and in noise, while controlling for visual speech decoding. Additionally, the study considered participants' auditory status and auditory abilities, as well as their abilities to produce and decode CS, in speech perception.

METHOD

Thirty-one adults with hearing loss (HL) who were proficient in CS decoding participated, alongside 52 adults with typical hearing (TH): 14 CS interpreters and 38 individuals naive to the system. The study employed a speech recognition test that presented CS gestures, lipreading, and lipreading integrated with CS gestures, either without sound or combined with speech sounds in quiet or in noise.

RESULTS

Participants with HL and lower auditory abilities integrated the auditory input with CS gestures, increasing their recognition scores by 44% in quiet. For participants with HL and higher auditory abilities, integrating CS gestures with the auditory input presented in noise increased recognition scores by 43.1% over the auditory-only condition. For all participants with HL, CS integrated with lipreading produced optimal recognition regardless of their auditory abilities, whereas for those with TH, adding CS gestures did not enhance lipreading, and AV benefits were observed only when lipreading was integrated with the auditory input presented in noise.

CONCLUSIONS

Individuals with HL are able to integrate CS gestures with auditory input. Visually supporting auditory speech with CS gestures improves speech recognition in noise and, for participants with HL and low auditory abilities, in quiet as well.

