
Bisensory augmentation: A speechreading advantage when speech is clearly audible and intact.

Author information

Arnold Paul, Hill Fiona

Affiliations

University of Manchester, UK.

Publication information

Br J Psychol. 2001 May;92(Pt 2):339-55.

PMID: 11802877
Abstract

Reisberg, McLean, and Goldfield (1987) have shown that vision plays a part in the perception of speech even when the auditory signal is clearly audible and intact. Using an alternative method the present study replicated their finding. Clearly audible spoken messages were presented in audio-only and audio-visual conditions, and the adult participants' resulting comprehension was measured. Stories were presented in French (Expt 1), in a Glaswegian accent (Expt 2), and by presenting spoken information that was semantically and syntactically complex (Expt 3). Three separate groups of 16 adult female participants aged 19-21 participated in the three experiments. In all three experiments, comprehension improved significantly when the speaker's face was visible.


Similar articles

1
Bisensory augmentation: a speechreading advantage when speech is clearly audible and intact.
Br J Psychol. 2001 May;92(Pt 2):339-55.
2
Audio-visual interactions with intact clearly audible speech.
Q J Exp Psychol A. 2004 Aug;57(6):1103-21. doi: 10.1080/02724980343000701.
3
Bisensory augmentation of complex spoken passages.
Br J Audiol. 2001 Feb;35(1):53-8. doi: 10.1080/03005364.2001.11742731.
4
Seeing to hear better: evidence for early audio-visual interactions in speech identification.
Cognition. 2004 Sep;93(2):B69-78. doi: 10.1016/j.cognition.2004.01.006.
5
Visual influences on alignment to voice onset time.
J Speech Lang Hear Res. 2010 Apr;53(2):262-72. doi: 10.1044/1092-4388(2009/08-0247). Epub 2010 Mar 10.
6
Multisensory integration sites identified by perception of spatial wavelet filtered visual speech gesture information.
J Cogn Neurosci. 2004 Jun;16(5):805-16. doi: 10.1162/089892904970771.
7
The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.
Ear Hear. 2009 Apr;30(2):262-72. doi: 10.1097/AUD.0b013e3181987063.
8
Bimodal audio-visual training enhances auditory adaptation process.
Neuroreport. 2009 Sep 23;20(14):1231-4. doi: 10.1097/WNR.0b013e32832fbef8.
9
The interplay between the auditory and visual modality for end-of-utterance detection.
J Acoust Soc Am. 2008 Jan;123(1):354-65. doi: 10.1121/1.2816561.

Cited by

1
Cloth Mask with Window as an Alternative to Opaque Mask for Students with Speech, Language, and Hearing Deficits for Infection Risk Mitigation.
Kans J Med. 2025 Feb 17;18(1):1-4. doi: 10.17161/kjm.vol18.22422. eCollection 2025 Jan-Feb.
2
Understanding discourse in face-to-face settings: The impact of multimodal cues and listening conditions.
J Exp Psychol Learn Mem Cogn. 2025 May;51(5):837-854. doi: 10.1037/xlm0001399. Epub 2024 Oct 14.
3
Visual scanning patterns of a talking face when evaluating phonetic information in a native and non-native language.
PLoS One. 2024 May 28;19(5):e0304150. doi: 10.1371/journal.pone.0304150. eCollection 2024.
4
Eye movement differences when recognising and learning moving and static faces.
Q J Exp Psychol (Hove). 2025 Apr;78(4):744-765. doi: 10.1177/17470218241252145. Epub 2024 May 14.
5
Performance in an Audiovisual Selective Attention Task Using Speech-Like Stimuli Depends on the Talker Identities, But Not Temporal Coherence.
Trends Hear. 2023 Jan-Dec;27:23312165231207235. doi: 10.1177/23312165231207235.
6
The effect of gaze on EEG measures of multisensory integration in a cocktail party scenario.
bioRxiv. 2023 Aug 24:2023.08.23.554451. doi: 10.1101/2023.08.23.554451.
7
Mouth and facial informativeness norms for 2276 English words.
Behav Res Methods. 2024 Aug;56(5):4786-4801. doi: 10.3758/s13428-023-02216-z. Epub 2023 Aug 21.
8
Semantic priming from McGurk words: Priming depends on perception.
Atten Percept Psychophys. 2023 May;85(4):1219-1237. doi: 10.3758/s13414-023-02689-2. Epub 2023 Apr 25.
9
Effect of wearing personal protective equipment on acoustic characteristics and speech perception during COVID-19.
Appl Acoust. 2022 Aug;197:108940. doi: 10.1016/j.apacoust.2022.108940. Epub 2022 Jul 22.
10
Violation of non-adjacent rule dependencies elicits greater attention to a talker's mouth in 15-month-old infants.
Infancy. 2022 Sep;27(5):963-971. doi: 10.1111/infa.12489. Epub 2022 Jul 14.