Latif Nida, Alsius Agnès, Munhall K G
Department of Psychology, McGill University, Stewart Biology Building N6/7, 1205 Dr. Penfield Avenue, Montreal, QC, H3A 1B1, Canada.
Department of Psychology, Queen's University, Humphrey Hall 307, 62 Arch Street, Kingston, ON, K7L 3N6, Canada.
Atten Percept Psychophys. 2018 Jan;80(1):27-41. doi: 10.3758/s13414-017-1428-0.
When engaging in conversation, we efficiently go back and forth with our partner, organizing our contributions in reciprocal turn-taking behavior. Using multiple auditory and visual cues, we make online decisions about when it is appropriate to take our turn. In two experiments, we demonstrated, for the first time, that auditory and visual information serve complementary roles when making such turn-taking decisions. We presented clips of single utterances spoken by individuals engaged in conversations in audiovisual, auditory-only, or visual-only modalities. These utterances occurred either right before a turn exchange (i.e., 'Turn-Ends') or right before the next sentence spoken by the same talker (i.e., 'Turn-Continuations'). In Experiment 1, participants discriminated between Turn-Ends and Turn-Continuations in order to synchronize a button-press response to the moment the talker would stop speaking. We showed that participants were best at discriminating between Turn-Ends and Turn-Continuations in the audiovisual condition. However, in terms of response synchronization, participants were equally precise at timing their responses to a Turn-End in the audiovisual and auditory-only conditions, showing no advantage of visual information. In Experiment 2, we used a gating paradigm, in which increasing segments of Turn-Ends and Turn-Continuations were presented, and participants predicted whether a turn exchange would occur at the end of the sentence. We found an audiovisual advantage in detecting an upcoming turn early in the perception of a turn exchange. Together, these results suggest that visual information functions as an early signal indicating an upcoming turn exchange, while auditory information is used to precisely time a response to the turn end.