Basil Wahn, Peter König
Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany.
Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany.
Front Integr Neurosci. 2016 Mar 8;10:13. doi: 10.3389/fnint.2016.00013. eCollection 2016.
Humans constantly process and integrate sensory input from multiple sensory modalities. However, the amount of input that can be processed is constrained by limited attentional resources. A matter of ongoing debate is whether attentional resources are shared across sensory modalities, and whether multisensory integration depends on attentional resources. Previous research suggested that the distribution of attentional resources across sensory modalities depends on the type of task. Here, we tested a novel task combination in a dual task paradigm: participants performed a self-terminated visual search task and a localization task either in separate sensory modalities (i.e., haptics and vision) or both within the visual modality. The two tasks interfered considerably with each other. However, participants performed the visual search task faster when the localization task was performed in the tactile modality rather than when both tasks were performed within the visual modality. This finding indicates that tasks performed in separate sensory modalities rely in part on distinct attentional resources. Nevertheless, participants integrated visuotactile information optimally in the localization task even when attentional resources were diverted to the visual search task. Overall, our findings suggest that visual search and tactile localization partly rely on distinct attentional resources, and that optimal visuotactile integration does not depend on attentional resources.
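The "optimal" integration the abstract refers to is conventionally formalized as maximum-likelihood (reliability-weighted) cue combination, in which each modality's estimate is weighted by its inverse variance and the combined estimate has lower variance than either cue alone. The sketch below illustrates this standard model with made-up numbers; it is not the study's actual analysis pipeline, and the specific values are purely illustrative.

```python
def mle_combine(mu_v, var_v, mu_t, var_t):
    """Combine a visual and a tactile location estimate by
    inverse-variance (maximum-likelihood) weighting.

    mu_v, var_v: mean and variance of the visual estimate
    mu_t, var_t: mean and variance of the tactile estimate
    """
    # Each cue's weight is proportional to its reliability (1/variance).
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_t)
    w_t = 1.0 - w_v
    mu_combined = w_v * mu_v + w_t * mu_t
    # The combined variance is lower than either single-cue variance.
    var_combined = (var_v * var_t) / (var_v + var_t)
    return mu_combined, var_combined

# Illustrative values: a noisy visual cue and a more reliable tactile cue.
mu, var = mle_combine(mu_v=10.0, var_v=4.0, mu_t=12.0, var_t=1.0)
# The tactile cue dominates (weight 0.8), and the combined variance (0.8)
# is below both single-cue variances (4.0 and 1.0).
```

Empirically, integration is judged "optimal" when observers' bimodal localization variance matches this predicted `var_combined` rather than the better single cue alone.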