Suied Clara, Bonneel Nicolas, Viaud-Delmon Isabelle
CNRS, UPMC UMR 7593, Hôpital de la Salpêtrière, Paris, France.
Exp Brain Res. 2009 Mar;194(1):91-102. doi: 10.1007/s00221-008-1672-6. Epub 2008 Dec 18.
Recognizing a natural object requires one to pool information from various sensory modalities and to ignore information from competing objects. That the same semantic knowledge can be accessed through different modalities makes it possible to explore the retrieval of supramodal object concepts. Here, object-recognition processes were investigated by manipulating two relationships between sensory modalities: the semantic content and the spatial alignment of auditory and visual information. Experiments were run in a realistic virtual environment. Participants were asked to react as fast as possible to a target object presented in the visual and/or the auditory modality and to inhibit a distractor object (go/no-go task). Spatial alignment had no effect on object-recognition time. The only spatial effect observed was a stimulus-response compatibility between the auditory stimulus and the hand position. Reaction times were significantly shorter for semantically congruent bimodal stimuli than would be predicted by independent processing of the auditory and visual target information. Interestingly, this bimodal facilitation effect was twice as large as that found in previous studies that also used information-rich stimuli. An interference effect (i.e. longer reaction times to semantically incongruent stimuli than to the corresponding unimodal stimulus) was observed only when the distractor was auditory. When the distractor was visual, semantic incongruence did not interfere with object recognition. Our results show that immersive displays with large visual stimuli may produce large multimodal integration effects, and they reveal a possible asymmetry in the attentional filtering of irrelevant auditory and visual information.
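The comparison against "independent processing" mentioned in the abstract is commonly carried out with Miller's race-model inequality, which bounds the bimodal reaction-time CDF by the sum of the two unimodal CDFs: P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V). The abstract does not state which test the authors used, so the sketch below is only an illustration of that standard analysis, with hypothetical function and variable names:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of a sample of reaction times, evaluated at times t."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, n_points=200):
    """Maximum violation of Miller's race-model inequality.

    A positive return value means the bimodal CDF exceeds the bound
    predicted by independent (race) processing of the auditory and
    visual signals, i.e. evidence for multisensory coactivation.
    """
    all_rts = np.concatenate([rt_a, rt_v, rt_av])
    t = np.linspace(all_rts.min(), all_rts.max(), n_points)
    # Race-model bound, capped at 1 since it is a probability.
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return float(np.max(ecdf(rt_av, t) - bound))
```

With bimodal reaction times markedly faster than either unimodal distribution, the function returns a positive value (violation, suggesting integration); if bimodal responses are no faster than the race-model bound, it returns zero or a negative value.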