When hearing the bark helps to identify the dog: semantically-congruent sounds modulate the identification of masked pictures.

Affiliation

Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, OX1 3UD, UK.

Publication information

Cognition. 2010 Mar;114(3):389-404. doi: 10.1016/j.cognition.2009.10.012. Epub 2009 Nov 11.

Abstract

We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the picture at a stimulus onset asynchrony (SOA) that varied between 0 and 533 ms (auditory lagging). The sound could be semantically congruent, semantically incongruent, or else neutral (white noise) with respect to the target picture. The results showed that when the onset of the picture and sound occurred simultaneously, a semantically-congruent sound improved, whereas a semantically-incongruent sound impaired, participants' picture identification performance, as compared to performance in the white-noise control condition. A significant facilitatory effect was also observed at SOAs of around 300 ms, whereas no such semantic congruency effects were observed at the longest interval (533 ms). These results therefore suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system. Furthermore, this crossmodal semantic interaction is not constrained by the need for the strict temporal coincidence of the constituent auditory and visual stimuli. We therefore suggest that audiovisual semantic interactions likely occur in a short-term buffer which rapidly accesses, and temporarily retains, the semantic representations of multisensory stimuli in order to form a coherent multisensory object representation. These results are explained in terms of Potter's (1993) notion of conceptual short-term memory.

