Multi-view emotional expressions dataset using 2D pose estimation.

Affiliations

Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China.

Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China.

Publication Information

Sci Data. 2023 Sep 22;10(1):649. doi: 10.1038/s41597-023-02551-y.

Abstract

Human body expressions convey emotional shifts and intentions of action and, in some cases, are even more effective than other emotion models. Although many body-expression datasets incorporating motion capture are available, widely distributed datasets of naturalized body expressions based on 2D video are still lacking. In this paper, we therefore report the multi-view emotional expressions dataset (MEED) using 2D pose estimation. Twenty-two actors presented six emotional (anger, disgust, fear, happiness, sadness, surprise) and neutral body movements from three viewpoints (left, front, right). A total of 4102 videos were captured. The MEED consists of the corresponding pose estimation results (i.e., 397,809 PNG files and 397,809 JSON files). The size of MEED exceeds 150 GB. We believe this dataset will benefit research in various fields, including affective computing, human-computer interaction, social neuroscience, and psychiatry.
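
As a rough illustration of how the per-frame pose-estimation output might be consumed, the Python sketch below reads one of the dataset's JSON files. The directory name MEED/, the OpenPose-style "people" / "pose_keypoints_2d" keys, and the load_keypoints helper are assumptions made for illustration only; the abstract states only that results are stored as paired PNG and JSON files, not the exact schema.

    # Minimal sketch: reading MEED per-frame pose-estimation JSON files.
    # ASSUMPTIONS: the dataset root "MEED/" and an OpenPose-style schema
    # ("people" -> "pose_keypoints_2d") are hypothetical; adjust to the
    # layout of the released dataset.
    import json
    from pathlib import Path

    MEED_ROOT = Path("MEED")  # hypothetical local root of the downloaded dataset

    def load_keypoints(json_path: Path):
        """Return per-person 2D keypoint lists from one per-frame JSON file."""
        with json_path.open("r", encoding="utf-8") as f:
            data = json.load(f)
        # Each detected person is assumed to carry a flat
        # [x1, y1, c1, x2, y2, c2, ...] keypoint array.
        return [p.get("pose_keypoints_2d", []) for p in data.get("people", [])]

    if __name__ == "__main__":
        json_files = sorted(MEED_ROOT.rglob("*.json"))
        print(f"Found {len(json_files)} JSON files")  # the paper reports 397,809
        if json_files:
            print("Keypoints in first frame:", load_keypoints(json_files[0]))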

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cf58/10516935/9f6bc46b5202/41597_2023_2551_Fig1_HTML.jpg
