Wahn Basil, König Peter
Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany.
Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany.
Front Psychol. 2015 Jul 29;6:1084. doi: 10.3389/fpsyg.2015.01084. eCollection 2015.
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that receiving spatial information from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, these findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.