The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions.

Affiliations

Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey.

SCALab - Sciences Cognitives et Sciences Affectives, Université de Lille, CNRS, Lille, France.

Publication Information

Behav Res Methods. 2023 Sep;55(6):3078-3099. doi: 10.3758/s13428-022-01951-z. Epub 2022 Aug 26.

Abstract

Faces convey a wide range of information, including one's identity, and emotional and mental states. Face perception is a major research topic in many research fields, such as cognitive science, social psychology, and neuroscience. Frequently, stimuli are selected from a range of available face databases. However, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored with a resolution of 1920 × 1080 pixels at a frame rate of 60 Hz. The multimodal database consists of three videos of each human model in frontal view in three different conditions: vocalizing two scripted texts (conditions 1 and 2) and one Free Speech (condition 3). The main focus of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speeches (e.g., duration of speech and repetitions). In two validation experiments, a total of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
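The abstract reports fixed technical specifications for every clip (1920 × 1080 pixels at 60 Hz). Below is a minimal sketch, not taken from the paper, of how one might verify that downloaded stimuli match these reported specifications using OpenCV; the assumption that clips are standard video files (e.g., MP4) and the file naming scheme shown are hypothetical.

```python
# Minimal sketch (assumption: clips are distributed as standard video files,
# e.g. MP4) for checking that a downloaded SUDFace clip matches the specs
# reported in the abstract: 1920 x 1080 pixels at a frame rate of 60 Hz.
import cv2  # pip install opencv-python

EXPECTED_SIZE = (1920, 1080)
EXPECTED_FPS = 60.0

def check_clip(path: str) -> None:
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"Could not open {path}")
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    duration = n_frames / fps if fps else float("nan")
    print(f"{path}: {width}x{height} @ {fps:.2f} fps, ~{duration:.1f} s")
    assert (width, height) == EXPECTED_SIZE, "unexpected resolution"
    assert abs(fps - EXPECTED_FPS) < 1.0, "unexpected frame rate"

# Hypothetical file name; the actual naming scheme is documented with the database.
check_clip("model_001_condition1.mp4")
```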
