Audiovisual synchrony perception for music, speech, and object actions.

Author information

Vatakis Argiro, Spence Charles

Affiliation

Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, OX1 3UD, UK.

Publication information

Brain Res. 2006 Sep 21;1111(1):134-42. doi: 10.1016/j.brainres.2006.05.078. Epub 2006 Jul 31.

Abstract

We investigated the perception of synchrony for complex audiovisual events. In Experiment 1, a series of music (guitar and piano), speech (sentences), and object action video clips were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which stream (auditory or visual) appeared to have been presented first. Temporal discrimination accuracy was significantly better for the object actions than for the speech video clips, and both were significantly better than for the music video clips. In order to investigate whether or not these differences in TOJ performance were driven by differences in stimulus familiarity, we conducted a second experiment using brief speech (syllables), music (guitar), and object action video clips of fixed duration, together with temporally reversed (i.e., less familiar) versions of the same stimuli. The results showed no main effect of stimulus type on temporal discrimination accuracy. Interestingly, however, reversing the video clips resulted in a significant decrement in temporal discrimination accuracy, as compared to the normally presented clips, for the music and object action stimuli, but not for the speech stimuli. Overall, our results suggest that cross-modal temporal discrimination performance is better for audiovisual stimuli of lower complexity as compared to stimuli having continuously varying properties (e.g., syllables versus words and/or sentences).
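The abstract reports temporal discrimination accuracy for TOJ responses collected across a range of SOAs with the method of constant stimuli, but it does not spell out the analysis. A common approach for this kind of data is to fit a cumulative Gaussian psychometric function to the proportion of "vision-first" responses at each SOA and read off the point of subjective simultaneity (PSS) and just noticeable difference (JND). The sketch below illustrates that standard approach only; the SOA values, response proportions, and choice of fitting function are assumptions for illustration and are not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: proportion of "vision first" responses at each SOA (ms).
# Negative SOAs = audio leads; positive SOAs = vision leads. Values are illustrative only.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_vision_first = np.array([0.05, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95, 0.98])

def cumulative_gaussian(soa, pss, sigma):
    """Probability of judging 'vision first' at a given SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Fit the psychometric function: pss is the point of subjective simultaneity,
# sigma indexes temporal sensitivity (smaller sigma = finer discrimination).
(pss, sigma), _ = curve_fit(cumulative_gaussian, soas, p_vision_first, p0=[0.0, 100.0])

# The JND is conventionally half the distance between the 25% and 75% points
# of the fitted curve, i.e. sigma scaled by the 75th-percentile z-score.
jnd = 0.6745 * sigma  # norm.ppf(0.75) ≈ 0.6745

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

In an analysis of this form, a smaller JND corresponds to the better temporal discrimination accuracy reported here for the object action clips relative to the speech and music clips.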
