Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands.
Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands.
Psychon Bull Rev. 2023 Apr;30(2):792-801. doi: 10.3758/s13423-022-02178-x. Epub 2022 Sep 22.
During face-to-face communication, recipients need to rapidly integrate a plethora of auditory and visual signals. Integrating signals from many different bodily articulators, each offset in time relative to the information in the speech stream, may either tax the cognitive system, thus slowing down language processing, or result in multimodal facilitation. Using the classical shadowing paradigm, participants shadowed speech from face-to-face, naturalistic dyadic conversations in an audiovisual context, an audiovisual context without visual speech (e.g., lips), and an audio-only context. Our results provide evidence of a multimodal facilitation effect in human communication: participants were faster at shadowing words when seeing multimodal messages than when hearing only audio. Moreover, the more visual context was present, the fewer shadowing errors were made and the earlier in time participants shadowed predicted lexical items. We propose that this multimodal facilitation effect may contribute to the ease of fast face-to-face conversational interaction.