

Quantifying the contribution of vision to speech perception in noise.

Author information

MacLeod A, Summerfield Q

Publication information

Br J Audiol. 1987 May;21(2):131-41. doi: 10.3109/03005368709077786.

DOI:10.3109/03005368709077786
PMID:3594015
Abstract

The intelligibility of sentences presented in noise improves when the listener can view the talker's face. Our aims were to quantify this benefit, and to relate it to individual differences among subjects in lipreading ability and among sentences in lipreading difficulty. Auditory and audiovisual speech-reception thresholds (SRTs) were measured in 20 listeners with normal hearing. Sixty sentences, selected to range in the difficulty with which they could be lipread (with vision alone) from easy to hard, were presented for identification in white noise. Using the ascending method of limits, the SRT was defined as the lowest signal-to-noise ratio at which all three 'key words' in each sentence could be identified correctly. Measured as the difference in dB between auditory-alone and audiovisual SRTs, 'audiovisual benefit' averaged 11 dB, ranging from 6 to 15 dB among subjects, and from 3 to 22 dB among sentences. As predicted, audiovisual benefit is a measure of lipreading ability. It was highly correlated with visual-alone performance (n = 20, r = 0.86, P less than 0.01). Likewise, those sentences which were easiest to lipread gave a higher measure of benefit from vision in audiovisual conditions than did sentences that were hard to lipread (n = 60, r = 0.92, P less than 0.01). The results establish the basis of an efficient test of speech-reception disability in which measures are freed from the floor and ceiling effects encountered when percentage correct is used as the dependent variable.
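The measurement described above reduces to two simple computations: an SRT found by the ascending method of limits (the lowest signal-to-noise ratio at which all three key words are identified), and audiovisual benefit as the dB difference between auditory-alone and audiovisual SRTs, related to lipreading scores by Pearson correlation. A minimal sketch, with illustrative (not the authors') data structures and function names:

```python
import statistics

def srt_ascending(trials):
    """Ascending method of limits: trials is a list of (snr_db, all_key_words_correct)
    pairs; the SRT is the lowest SNR at which all three key words of the
    sentence were identified correctly."""
    for snr, correct in sorted(trials):
        if correct:
            return snr
    return None  # no threshold reached within the tested range

def audiovisual_benefit(srt_auditory, srt_audiovisual):
    """Audiovisual benefit in dB: auditory-alone SRT minus audiovisual SRT."""
    return srt_auditory - srt_audiovisual

def pearson_r(xs, ys):
    """Pearson correlation, as used to relate benefit to visual-alone scores."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Illustrative subject: auditory-alone SRT of -2 dB, audiovisual SRT of -13 dB
# gives the paper's average benefit of 11 dB.
trials = [(-12, False), (-9, False), (-6, True), (-3, True)]
print(srt_ascending(trials))            # lowest SNR with all key words correct
print(audiovisual_benefit(-2.0, -13.0)) # benefit in dB
```

Because the dependent variable is a threshold in dB rather than percentage correct, such scores avoid the floor and ceiling effects the abstract notes.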


Similar articles

1
Quantifying the contribution of vision to speech perception in noise.
Br J Audiol. 1987 May;21(2):131-41. doi: 10.3109/03005368709077786.
2
A procedure for measuring auditory and audio-visual speech-reception thresholds for sentences in noise: rationale, evaluation, and recommendations for use.
Br J Audiol. 1990 Feb;24(1):29-43. doi: 10.3109/03005369009077840.
3
The benefit obtained from visually displayed text from an automatic speech recognizer during listening to speech presented in noise.
Ear Hear. 2008 Dec;29(6):838-52. doi: 10.1097/AUD.0b013e31818005bd.
4
Development of the Listening in Spatialized Noise-Sentences Test (LISN-S).
Ear Hear. 2007 Apr;28(2):196-211. doi: 10.1097/AUD.0b013e318031267f.
5
Validating a Method to Assess Lipreading, Audiovisual Gain, and Integration During Speech Reception With Cochlear-Implanted and Normal-Hearing Subjects Using a Talking Head.
Ear Hear. 2018 May/Jun;39(3):503-516. doi: 10.1097/AUD.0000000000000502.
6
The influence of semantically related and unrelated text cues on the intelligibility of sentences in noise.
Ear Hear. 2011 Nov-Dec;32(6):e16-25. doi: 10.1097/AUD.0b013e318228036a.
7
Audiovisual asynchrony detection and speech intelligibility in noise with moderate to severe sensorineural hearing impairment.
Ear Hear. 2011 Sep-Oct;32(5):582-92. doi: 10.1097/AUD.0b013e31820fca23.
8
Impact of visual cues on directional benefit and preference: Part I--laboratory tests.
Ear Hear. 2010 Feb;31(1):22-34. doi: 10.1097/AUD.0b013e3181bc767e.
9
The effect of speechreading on the speech-reception threshold of sentences in noise.
J Acoust Soc Am. 1987 Dec;82(6):2145-7. doi: 10.1121/1.395659.
10
The Norwegian Hearing in Noise Test for Children.
Ear Hear. 2016 Jan-Feb;37(1):80-92. doi: 10.1097/AUD.0000000000000224.

Cited by

1
Lip-Reading: Advances and Unresolved Questions in a Key Communication Skill.
Audiol Res. 2025 Jul 21;15(4):89. doi: 10.3390/audiolres15040089.
2
Seeing a Talker's Mouth Reduces the Effort of Perceiving Speech and Repairing Perceptual Mistakes for Listeners With Cochlear Implants.
Ear Hear. 2025 Jun 16. doi: 10.1097/AUD.0000000000001683.
3
Synchrony perception of audiovisual speech is a reliable, yet individual construct.
Sci Rep. 2025 May 7;15(1):15909. doi: 10.1038/s41598-025-00243-8.
4
Neural mechanisms of lipreading in the Polish-speaking population: effects of linguistic complexity and sex differences.
Sci Rep. 2025 Apr 17;15(1):13253. doi: 10.1038/s41598-025-98026-8.
5
Eye Movements in Silent Visual Speech Track Unheard Acoustic Signals and Relate to Hearing Experience.
eNeuro. 2025 Apr 28;12(4). doi: 10.1523/ENEURO.0055-25.2025. Print 2025 Apr.
6
Using the Listening2Faces App with Three Young Adults with Autism: A Feasibility Study.
Adv Neurodev Disord. 2025;9(1):51-63. doi: 10.1007/s41252-023-00390-x. Epub 2024 Jan 19.
7
Plasticity in older infants' perception of phonetic contrasts: The role of selective attention in context.
Infancy. 2025 Jan-Feb;30(1):e12620. doi: 10.1111/infa.12620. Epub 2024 Aug 27.
8
Evidence for a Causal Dissociation of the McGurk Effect and Congruent Audiovisual Speech Perception via TMS to the Left pSTS.
Multisens Res. 2024 Aug 16;37(4-5):341-363. doi: 10.1163/22134808-bja10129.
9
The impact of face coverings on audio-visual contributions to communication with conversational speech.
Cogn Res Princ Implic. 2024 Apr 23;9(1):25. doi: 10.1186/s41235-024-00552-y.
10
Eye movement differences when recognising and learning moving and static faces.
Q J Exp Psychol (Hove). 2025 Apr;78(4):744-765. doi: 10.1177/17470218241252145. Epub 2024 May 14.