Spivey M J, Tyler M J, Eberhard K M, Tanenhaus M K
Department of Psychology, Cornell University, Ithaca, NY 14853, USA.
Psychol Sci. 2001 Jul;12(4):282-6. doi: 10.1111/1467-9280.00352.
During an individual's normal interaction with the environment and other humans, visual and linguistic signals often coincide and can be integrated very quickly. This has been clearly demonstrated in recent eye-tracking studies showing that visual perception constrains on-line comprehension of spoken language. In a modified visual search task, we found the inverse: real-time language comprehension can also constrain visual perception. In standard visual search tasks, the number of distractors in the display strongly affects search time for a target defined by a conjunction of features, but not for a target defined by a single feature. However, we found that when a conjunction target was identified by a spoken instruction presented concurrently with the visual display, the incremental processing of spoken language allowed the search process to proceed in a manner considerably less affected by the number of distractors. These results suggest that perceptual systems specialized for language and for vision interact more fluidly than previously thought.