Tri-modal speech: Audio-visual-tactile integration in speech perception.

Author Affiliations

New Zealand Institute of Language, Brain, and Behaviour, University of Canterbury, 20 Kirkwood Avenue, Upper Riccarton, Christchurch 8041, New Zealand.

School of Psychology, Speech and Hearing, University of Canterbury, 20 Kirkwood Avenue, Upper Riccarton, Christchurch 8041, New Zealand.

Publication Information

J Acoust Soc Am. 2019 Nov;146(5):3495. doi: 10.1121/1.5134064.

Abstract

Speech perception is a multi-sensory experience. Visual information can both enhance [Sumby and Pollack (1954). J. Acoust. Soc. Am. 25, 212-215] and interfere with [McGurk and MacDonald (1976). Nature 264, 746-748] speech perception. Similarly, tactile information, transmitted by puffs of air arriving at the skin and aligned with speech audio, alters [Gick and Derrick (2009). Nature 462, 502-504] auditory speech perception in noise. It has also been shown that aero-tactile information influences visual speech perception when an auditory signal is absent [Derrick, Bicevskis, and Gick (2019a). Front. Commun. Lang. Sci. 3(61), 1-11]. However, researchers have not yet identified the combined influence of aero-tactile, visual, and auditory information on speech perception. Here, the effects of matching and mismatching visual and tactile speech on a two-way forced-choice auditory syllable-in-noise classification task were tested. The results showed that both visual and tactile information altered the signal-to-noise ratio (SNR) threshold for accurate identification of auditory signals. Consistent with previous studies, the visual component had a strong influence on auditory syllable-in-noise identification, as evidenced by a 28.04 dB difference in SNR threshold between matching and mismatching visual stimulus presentations. In comparison, the tactile component had a small influence, resulting in a 1.58 dB SNR match-mismatch range. The effects of the visual and tactile information were shown to be additive.
