Rosenblum, Lawrence D.
University of California, Riverside.
Curr Dir Psychol Sci. 2008 Dec;17(6):405-409. doi: 10.1111/j.1467-8721.2008.00615.x.
Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal speech information could explain the reported automaticity, immediacy, and completeness of audiovisual speech integration. However, recent findings suggest that speech integration can be influenced by higher cognitive properties such as lexical status and semantic context. Proponents of amodal accounts will need to explain these results.