Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, United States of America.
PLoS One. 2013 Apr 29;8(4):e62802. doi: 10.1371/journal.pone.0062802. Print 2013.
Cross-modal processing depends strongly on the compatibility between different sensory inputs, the relative timing of their arrival to brain processing components, and on how attention is allocated. In this behavioral study, we employed a cross-modal audio-visual Stroop task in which we manipulated the within-trial stimulus-onset asynchronies (SOAs) of the stimulus-component inputs, the grouping of the SOAs (blocked vs. random), the attended modality (auditory or visual), and the congruency of the Stroop color-word stimuli (congruent, incongruent, neutral) to assess how these factors interact within a multisensory context. One main result was that visual distractors produced larger incongruency effects on auditory targets than auditory distractors did on visual targets. Moreover, as revealed by both overall shorter response times (RTs) and relative shifts in the psychometric incongruency-effect functions, visual-information processing was faster and produced stronger and longer-lasting incongruency effects than did auditory-information processing. When attending to either modality, stimulus incongruency from the other modality interacted with SOA, yielding larger effects when the irrelevant distractor occurred prior to the attended target, but showed no interaction with SOA grouping. Finally, relative to neutral stimuli, and across the wide range of SOAs employed, congruency led to substantially more behavioral facilitation than incongruency did to interference, in contrast to findings that within-modality stimulus-compatibility effects tend to be more evenly split between facilitation and interference. In sum, the present findings reveal several key characteristics of how we process the stimulus compatibility of cross-modal sensory inputs, reflecting stimulus-processing patterns that are critical for successfully navigating our complex multisensory world.