

Validation of the Emotionally Congruent and Incongruent Face-Body Static Set (ECIFBSS).

Authors

Puffet Anne-Sophie, Rigoulot Simon

Affiliations

Department of Psychology, University of Quebec at Trois-Rivières, Trois-Rivières, Canada.

Research group CogNAC (Cognition, Neurosciences, Affect and Behaviour), Trois-Rivières, Canada.

Publication Information

Behav Res Methods. 2025 Jan 3;57(1):41. doi: 10.3758/s13428-024-02550-w.

Abstract

We frequently perceive emotional information through multiple channels (e.g., face, voice, posture). These cues interact, facilitating emotional perception when they are congruent (similar across channels) compared to incongruent (different). Most previous studies of this congruency effect combined stimuli from different sets, compromising stimulus quality. In this context, we created and validated a new static stimulus set (ECIFBSS) featuring 1952 facial and bodily expressions of basic emotions in congruent and incongruent situations. We photographed 40 actors expressing facial emotions and body postures (anger, disgust, happiness, neutral, fear, surprise, and sadness) in both congruent and incongruent situations. The validation was conducted in two parts. In the first part, 76 participants performed a recognition task on facial and bodily expressions separately. In the second part, 40 participants performed the same recognition task, along with an evaluation of four features: intensity, authenticity, arousal, and valence. All emotions (face and body) were well recognized. Consistent with the literature, facial emotions were recognized better than body postures. Happiness was the most accurately recognized facial emotion, while fear was the least. Among bodily expressions, anger had the highest recognition rate, while disgust had the lowest. Finally, facial and bodily expressions were rated as moderately authentic, and the evaluations of intensity, valence, and arousal aligned with the dimensional model. The ECIFBSS offers static stimuli for studying facial and bodily expressions of basic emotions, providing a new tool to explore the integration of emotional information across channels and its reciprocal influence.

