Lund University Cognitive Science, Lund University, Lund, Sweden.
STMS Lab, UMR 9912 (IRCAM/CNRS/SU), Paris, France.
Sci Rep. 2023 Apr 4;13(1):5507. doi: 10.1038/s41598-023-32133-2.
Emotional speech perception is a multisensory process. When speaking with an individual, we concurrently integrate the information from their voice and face to decode, for example, their feelings, moods, and emotions. However, the physiological reactions associated with these processes, such as the reflexive dilation of the pupil, remain mostly unknown. The aim of the current article is to investigate whether pupillary reactions can index the processes underlying the audiovisual integration of emotional signals. To address this question, we used an algorithm able to increase or decrease the smiles seen in a person's face or heard in their voice, while preserving the temporal synchrony between the visual and auditory channels. Using this algorithm, we created congruent and incongruent audiovisual smiles and investigated participants' gaze and pupillary reactions to the manipulated stimuli. We found that pupil reactions can reflect emotional information mismatch in audiovisual speech. In our data, when participants were explicitly asked to extract emotional information from the stimuli, the first fixation within emotionally mismatching areas (i.e., the mouth) triggered pupil dilation. These results reveal that pupil dilation can reflect the dynamic integration of audiovisual emotional speech and provide insights into how these reactions are triggered during stimulus perception.
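The fixation-locked pupil analysis described in the abstract can be illustrated with a minimal sketch. The Python snippet below is an assumption-laden illustration, not the authors' pipeline: it locates the first gaze sample entering a hypothetical mouth area of interest (AOI) and computes baseline-corrected pupil dilation in a short window after that fixation. The AOI coordinates, baseline and response window lengths, and the 500 Hz sampling rate are all invented for the example.

```python
# Illustrative sketch (not the authors' pipeline): baseline-corrected
# pupil dilation locked to the first gaze sample entering a mouth AOI.
# All names and parameters (aoi, window lengths, 500 Hz) are assumptions.
import numpy as np

def first_fixation_pupil_response(t, gx, gy, pupil, aoi,
                                  baseline=0.5, window=1.0):
    """t: sample times (s); gx, gy: gaze coordinates; pupil: pupil size.
    aoi: (x_min, y_min, x_max, y_max) mouth region in screen coordinates.
    Returns mean pupil change (response minus baseline) locked to the
    first sample falling inside the AOI, or None if gaze never enters it."""
    inside = (gx >= aoi[0]) & (gx <= aoi[2]) & (gy >= aoi[1]) & (gy <= aoi[3])
    hits = np.flatnonzero(inside)
    if hits.size == 0:
        return None
    t0 = t[hits[0]]                                        # onset of first AOI fixation
    base = pupil[(t >= t0 - baseline) & (t < t0)].mean()   # pre-fixation baseline
    resp = pupil[(t >= t0) & (t < t0 + window)].mean()     # post-fixation window
    return resp - base                                     # positive value = dilation

# Toy usage with synthetic 500 Hz data: gaze jumps into the AOI at 1.5 s,
# and the simulated pupil trace dilates slightly after that point.
rng = np.random.default_rng(0)
t = np.arange(0, 3, 0.002)
gx = np.where(t < 1.5, 200.0, 420.0) + rng.normal(0, 2, t.size)
gy = np.full(t.size, 300.0) + rng.normal(0, 2, t.size)
pupil = 3.0 + 0.2 * (t > 1.5) + rng.normal(0, 0.01, t.size)
print(first_fixation_pupil_response(t, gx, gy, pupil, aoi=(400, 250, 460, 350)))
```

In practice, such an analysis would also require fixation detection (rather than raw gaze samples), blink interpolation, and luminance control before pupil changes could be attributed to emotional mismatch.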