
Audiovisual integration of emotional signals from others' social interactions.

Authors

Piwek Lukasz, Pollick Frank, Petrini Karin

Author Affiliations

Behaviour Research Lab, Bristol Business School, University of the West of England, Bristol, UK.

School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow, UK.

Publication Information

Front Psychol. 2015 May 8;6:611. doi: 10.3389/fpsyg.2015.00611. eCollection 2015.

Abstract

Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask whether the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task as in Experiment 1 while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more heavily in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.

