
Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition, and auditory-visual integration.

Author Information

Grant K W, Walden B E, Seitz P F

Affiliations

Walter Reed Army Medical Center, Army Audiology and Speech Center, Washington, DC 20307-5001, USA.

Publication Information

J Acoust Soc Am. 1998 May;103(5 Pt 1):2677-90. doi: 10.1121/1.422788.

Abstract

Factors leading to variability in auditory-visual (AV) speech recognition include the subject's ability to extract auditory (A) and visual (V) signal-related cues, the integration of A and V cues, and the use of phonological, syntactic, and semantic context. In this study, measures of A, V, and AV recognition of medial consonants in isolated nonsense syllables and of words in sentences were obtained in a group of 29 hearing-impaired subjects. The test materials were presented in a background of speech-shaped noise at 0-dB signal-to-noise ratio. Most subjects achieved substantial AV benefit for both sets of materials relative to A-alone recognition performance. However, there was considerable variability in AV speech recognition both in terms of the overall recognition score achieved and in the amount of audiovisual gain. To account for this variability, consonant confusions were analyzed in terms of phonetic features to determine the degree of redundancy between A and V sources of information. In addition, a measure of integration ability was derived for each subject using recently developed models of AV integration. The results indicated that (1) AV feature reception was determined primarily by visual place cues and auditory voicing + manner cues, (2) the ability to integrate A and V consonant cues varied significantly across subjects, with better integrators achieving more AV benefit, and (3) significant intra-modality correlations were found between consonant measures and sentence measures, with AV consonant scores accounting for approximately 54% of the variability observed for AV sentence recognition. Integration modeling results suggested that speechreading and AV integration training could be useful for some individuals, potentially providing as much as 26% improvement in AV consonant recognition.
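The abstract does not name the specific "recently developed models of AV integration" it used. As a minimal illustrative sketch, assuming an FLMP-style (fuzzy logical model of perception) multiplicative rule, one family of AV integration models from that period, the hypothetical example below shows how a predicted AV score can be derived from a subject's A-alone and V-alone scores and compared against the observed AV score to index integration ability. The function name `flmp_av` and all scores are invented for illustration, and treating overall percent-correct as the model's unimodal support values is a simplification.

```python
# Minimal sketch of FLMP-style AV integration (not the authors' exact
# method). All scores below are hypothetical.

def flmp_av(p_a: float, p_v: float) -> float:
    """Combine auditory-alone (p_a) and visual-alone (p_v) response
    probabilities with the fuzzy logical model of perception's
    multiplicative rule (two-alternative form)."""
    return (p_a * p_v) / (p_a * p_v + (1.0 - p_a) * (1.0 - p_v))

# Hypothetical consonant scores (proportion correct) for one subject.
p_a, p_v = 0.55, 0.70
p_av_pred = flmp_av(p_a, p_v)     # ~0.74: model-predicted AV score

p_av_obs = 0.70                   # hypothetical observed AV score
benefit = p_av_obs - p_a          # AV benefit over A-alone, as in the abstract
shortfall = p_av_pred - p_av_obs  # headroom a better integrator might recover

print(f"predicted AV {p_av_pred:.2f} | observed AV {p_av_obs:.2f} | "
      f"benefit {benefit:.2f} | shortfall {shortfall:.2f}")
```

The gap between model-predicted and observed AV performance is plausibly the kind of quantity behind the abstract's estimate that integration training could yield up to 26% improvement for some subjects. Relatedly, "approximately 54% of the variability" corresponds to r² ≈ 0.54, i.e., a correlation of roughly 0.73 between AV consonant and AV sentence scores.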

