K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations.

Affiliations

Korea Advanced Institute of Science and Technology, Graduate School of Knowledge Service Engineering, Daejeon, 34141, South Korea.

Khalifa University of Science and Technology, Department of Biomedical Engineering, Abu Dhabi, 127788, United Arab Emirates.

Publication Information

Sci Data. 2020 Sep 8;7(1):293. doi: 10.1038/s41597-020-00630-y.

Abstract

Recognizing emotions during social interactions has many potential applications with the popularization of low-cost mobile sensors, but a challenge remains with the lack of naturalistic affective interaction data. Most existing emotion datasets do not support studying idiosyncratic emotions arising in the wild, as they were collected in constrained environments. Therefore, studying emotions in the context of social interactions requires a novel dataset, and K-EmoCon is such a multimodal dataset with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices from 16 sessions of approximately 10-minute-long paired debates on a social issue. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at 5-second intervals while viewing the debate footage, in terms of arousal-valence and 18 additional categorical emotions. The resulting K-EmoCon is the first publicly available emotion dataset accommodating the multiperspective assessment of emotions during social interactions.
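
The 5-second annotation scheme described above suggests a straightforward windowing pipeline for anyone pairing the physiological signals with the arousal-valence labels. Below is a minimal sketch of that alignment. The sampling rate, column names, and synthetic inputs are illustrative assumptions for this sketch, not the dataset's actual schema.

```python
import numpy as np
import pandas as pd

# Assumed sampling rate (Hz) of one wristband channel; actual rates in
# K-EmoCon vary by device and channel.
FS = 4
WINDOW_S = 5  # annotations are given at 5-second intervals

# Hypothetical inputs: a 1-D signal (~10 minutes, matching one debate
# session) and a per-interval annotation table. The 'seconds', 'arousal',
# and 'valence' columns are assumptions for illustration only.
signal = np.random.default_rng(0).normal(size=600 * FS)
annotations = pd.DataFrame({
    "seconds": np.arange(0, 600, WINDOW_S),
    "arousal": np.random.default_rng(1).integers(1, 6, size=120),
    "valence": np.random.default_rng(2).integers(1, 6, size=120),
})

# Segment the signal into non-overlapping 5-second windows and pair each
# window with the annotation covering that interval.
samples_per_window = FS * WINDOW_S
n_windows = min(len(signal) // samples_per_window, len(annotations))
windows = signal[: n_windows * samples_per_window].reshape(n_windows, -1)

X = windows  # shape: (n_windows, samples_per_window)
y = annotations.loc[: n_windows - 1, ["arousal", "valence"]].to_numpy()
print(X.shape, y.shape)  # (120, 20) (120, 2)
```

Since the dataset provides labels from self, partner, and observer perspectives, one would repeat this pairing per perspective or aggregate them (e.g., averaging the external observers' ratings) before modeling; which choice is appropriate depends on the research question.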

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3df6/7479607/5600f1b26d32/41597_2020_630_Fig1_HTML.jpg
