Chen Siyi, Shi Zhuanghua, Zang Xuelian, Zhu Xiuna, Assumpção Leonardo, Müller Hermann J, Geyer Thomas
General and Experimental Psychology, Department of Psychology, LMU Munich, Leopoldstr 13, 80802, Munich, Germany.
Center for Cognition and Brain Disorders, Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China.
Atten Percept Psychophys. 2020 May;82(4):1682-1694. doi: 10.3758/s13414-019-01907-0.
It is well established that statistical learning of visual target locations in relation to constantly positioned visual distractors facilitates visual search. In the present study, we investigated whether such a contextual-cueing effect would also work crossmodally, from touch onto vision. Participants responded to the orientation of a visual target singleton presented among seven homogeneous visual distractors. Four tactile stimuli, two delivered to different fingers of each hand, were presented either simultaneously with or prior to the visual stimuli. The identity of the stimulated fingers provided the crossmodal context cue: in half of the trials, a given visual target location was consistently paired with a given tactile configuration. The visual stimuli were presented above the unseen fingers, ensuring spatial correspondence between vision and touch. We found no evidence of crossmodal contextual cueing when the two sets of items (tactile, visual) were presented simultaneously (Experiment 1). However, a reliable crossmodal effect emerged when the tactile distractors preceded the onset of the visual stimuli by 700 ms (Experiment 2). This crossmodal cueing disappeared again, however, when, after an initial learning phase, participants flipped their hands, making the tactile distractors appear at different positions in external space while their somatotopic positions remained unchanged (Experiment 3). In all experiments, participants were unable to explicitly discriminate learned from novel multisensory arrays. These findings indicate that search-facilitating context memory can be established across vision and touch. However, in order to guide visual search, the (predictive) tactile configurations must be remapped from their initial somatotopic format into a common, external representational format.