

Perception of incongruent audiovisual English consonants.

Affiliations

Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, United States of America.

Publication Information

PLoS One. 2019 Mar 21;14(3):e0213588. doi: 10.1371/journal.pone.0213588. eCollection 2019.

Abstract

Causal inference-the process of deciding whether two incoming signals come from the same source-is an important step in audiovisual (AV) speech perception. This research explored causal inference and perception of incongruent AV English consonants. Nine adults were presented auditory, visual, congruent AV, and incongruent AV consonant-vowel syllables. Incongruent AV stimuli included auditory and visual syllables with matched vowels, but mismatched consonants. Open-set responses were collected. For most incongruent syllables, participants were aware of the mismatch between auditory and visual signals (59.04%) or reported the auditory syllable (33.73%). Otherwise, participants reported the visual syllable (1.13%) or some other syllable (6.11%). Statistical analyses were used to assess whether visual distinctiveness and place, voice, and manner features predicted responses. Mismatch responses occurred more when the auditory and visual consonants were visually distinct, when place and manner differed across auditory and visual consonants, and for consonants with high visual accuracy. Auditory responses occurred more when the auditory and visual consonants were visually similar, when place and manner were the same across auditory and visual stimuli, and with consonants produced further back in the mouth. Visual responses occurred more when voicing and manner were the same across auditory and visual stimuli, and for front and middle consonants. Other responses were variable, but typically matched the visual place, auditory voice, and auditory manner of the input. Overall, results indicate that causal inference and incongruent AV consonant perception depend on salience and reliability of auditory and visual inputs and degree of redundancy between auditory and visual inputs. A parameter-free computational model of incongruent AV speech perception based on unimodal confusions, with a causal inference rule, was applied. Data from the current study present an opportunity to test and improve the generalizability of current AV speech integration models.
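To make the modeling idea concrete, the following is a minimal, hypothetical sketch of the kind of parameter-free causal-inference rule the abstract describes: predicting the response to an incongruent AV syllable from unimodal confusion distributions alone. The function name, the decision criterion (overlap vs. chance agreement), and all confusion probabilities are invented for illustration and are not taken from the paper's actual model.

```python
# Hypothetical illustration of a causal-inference rule driven only by
# unimodal confusion data (no fitted parameters). Probabilities are invented.

def predict_av_response(p_aud, p_vis):
    """p_aud, p_vis: dicts mapping candidate consonants to the probability
    that each unimodal (auditory-only / visual-only) signal is perceived
    as that consonant."""
    consonants = set(p_aud) | set(p_vis)
    # Evidence for a single common cause: overlap between the two
    # unimodal percept distributions.
    overlap = sum(p_aud.get(c, 0.0) * p_vis.get(c, 0.0) for c in consonants)
    # Parameter-free criterion: compare overlap against chance agreement.
    chance = 1.0 / len(consonants)
    if overlap <= chance:
        # Infer two separate causes -> listener notices the conflict.
        return "mismatch"
    # Infer one cause -> fuse by multiplying unimodal likelihoods.
    fused = {c: p_aud.get(c, 0.0) * p_vis.get(c, 0.0) for c in consonants}
    return max(fused, key=fused.get)

# Invented example: auditory /ba/ dubbed onto visual /ga/.
p_aud = {"ba": 0.8, "da": 0.1, "ga": 0.1}
p_vis = {"ga": 0.7, "da": 0.25, "ba": 0.05}
print(predict_av_response(p_aud, p_vis))  # low overlap -> "mismatch"
```

With visually similar, largely redundant stimuli (high overlap), the same rule instead infers a common cause and returns the fused percept, mirroring the abstract's finding that auditory responses dominate when the two signals are hard to distinguish.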


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4fcb/6428273/1ef1f7d13589/pone.0213588.g001.jpg
