Driver J
Department of Psychology, Birkbeck College, University of London, UK.
Nature. 1996 May 2;381(6577):66-8. doi: 10.1038/381066a0.
Mechanisms of human attention allow selective processing of just the relevant events among the many stimuli bombarding our senses. Most laboratory studies examine attention within a single sense, but in the real world many important events are specified multimodally, as in verbal communication. Speech comprises visual lip movements as well as sounds, and lip-reading contributes to speech perception, even for listeners with good hearing, by a process of audiovisual integration. Such examples raise the problem of how we coordinate our spatial attention across the sensory modalities, to select sights and sounds from a common source for further processing. Here we show that this problem is alleviated by allowing some cross-modal matching before attentional selection is completed. Cross-modal matching can lead to an illusion whereby sounds are mislocated at their apparent visual source; this cross-modal illusion can enhance selective spatial attention to speech sounds.