Iordanescu Lucica, Grabowecky Marcia, Suzuki Satoru
Department of Psychology, Northwestern University, 2029 Sheridan Road, Evanston, IL 60208, United States.
Acta Psychol (Amst). 2011 Jun;137(2):252-9. doi: 10.1016/j.actpsy.2010.07.017. Epub 2010 Sep 22.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., "meow"), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., "meow") of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., "meow") should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory-visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.