Jesse, Alexandra; Massaro, Dominic W.
University of California, Santa Cruz, California, USA.
Atten Percept Psychophys. 2010 Jan;72(1):209-25. doi: 10.3758/APP.72.1.209.
In the present study, we examined the distribution and processing of information over time in auditory and visual speech as it is used in unimodal and bimodal word recognition. English consonant-vowel-consonant words representing all possible initial consonants were presented as auditory, visual, or audiovisual speech in a gating task. The distribution of information over time varied across and within features. Visual speech information was generally fully available early in the phoneme, whereas auditory information was still accumulating. An audiovisual benefit was therefore already found early in the phoneme. The nature of the audiovisual recognition benefit changed, however, as more of the phoneme was presented: more features benefited at shorter gates than at longer ones. Visual speech information therefore plays a more important role early in the phoneme than later. The results of the study showed the complex interplay of information across modalities and over time, which is essential in determining the time course of audiovisual spoken-word recognition.