Searching for audiovisual correspondence in multiple speaker scenarios.

Affiliation

Department of Psychology, Queen's University, 62 Arch St., Kingston, Ontario K7L 3N6, Canada.

Publication Information

Exp Brain Res. 2011 Sep;213(2-3):175-83. doi: 10.1007/s00221-011-2624-0. Epub 2011 Mar 23.

Abstract

A critical question in multisensory processing is how the constant information flow that arrives to our different senses is organized in coherent representations. Some authors claim that pre-attentive detection of inter-sensory correlations supports crossmodal binding, whereas other findings indicate that attention plays a crucial role. We used visual and auditory search tasks for speaking faces to address the role of selective spatial attention in audiovisual binding. Search efficiency amongst faces for the match with a voice declined with the number of faces being monitored concurrently, consistent with an attentive search mechanism. In contrast, search amongst auditory speech streams for the match with a face was independent of the number of streams being monitored concurrently, as long as localization was not required. We suggest that the fundamental differences in the way in which auditory and visual information is encoded play a limiting role in crossmodal binding. Based on these unisensory limitations, we provide a unified explanation for several previous apparently contradictory findings.
