Amedi A, von Kriegstein K, van Atteveldt NM, Beauchamp MS, Naumer MJ
Laboratory for Magnetic Brain Stimulation, Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA.
Exp Brain Res. 2005 Oct;166(3-4):559-71. doi: 10.1007/s00221-005-2396-5. Epub 2005 Jul 19.
The perception of objects is a cognitive function of prime importance. In everyday life, object perception benefits from the coordinated interplay of vision, audition, and touch. The different sensory modalities provide both complementary and redundant information about objects, which may improve recognition speed and accuracy in many circumstances. We review crossmodal studies of object recognition in humans that mainly employed functional magnetic resonance imaging (fMRI). These studies show that visual, tactile, and auditory information about objects can activate cortical association areas that were once believed to be modality-specific. Processing converges either in multisensory zones or via direct crossmodal interaction of modality-specific cortices without relay through multisensory regions. We integrate these findings with existing theories about semantic processing and propose a general mechanism for crossmodal object recognition: the recruitment and location of multisensory convergence zones vary depending on the information content and the dominant modality.