Korea Advanced Institute of Science and Technology, Graduate School of Knowledge Service Engineering, Daejeon, 34141, South Korea.
Khalifa University of Science and Technology, Department of Biomedical Engineering, Abu Dhabi, 127788, United Arab Emirates.
Sci Data. 2020 Sep 8;7(1):293. doi: 10.1038/s41597-020-00630-y.
Recognizing emotions during social interactions has many potential applications as low-cost mobile sensors become widespread, but progress is limited by the lack of naturalistic affective interaction data. Most existing emotion datasets were collected in constrained environments and therefore do not support studying the idiosyncratic emotions that arise in the wild. Studying emotions in the context of social interactions thus requires a new dataset, and K-EmoCon is such a multimodal dataset with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices from 16 sessions of approximately 10-minute-long paired debates on a social issue. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at 5-second intervals while viewing the debate footage, in terms of arousal-valence and 18 additional categorical emotions. The resulting K-EmoCon is the first publicly available emotion dataset to support multiperspective assessment of emotions during social interactions.
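To illustrate the annotation scheme described above (per-rater arousal-valence ratings at 5-second intervals, collected from the self, partner, and external-observer perspectives), the following is a minimal Python/pandas sketch of how such annotations might be loaded and compared across perspectives. The file names and column names (seconds, arousal, valence) are assumptions for illustration only, not the dataset's actual file layout or access API.

import pandas as pd

# Hypothetical per-perspective annotation files for one debate session
# (names assumed for illustration; not the dataset's actual layout).
perspectives = {
    "self": "session01_self_annotations.csv",
    "partner": "session01_partner_annotations.csv",
    "external": "session01_external_annotations.csv",
}

frames = []
for perspective, path in perspectives.items():
    # Assumed columns: seconds (interval start), arousal, valence.
    df = pd.read_csv(path)
    df["perspective"] = perspective
    frames.append(df)

annotations = pd.concat(frames, ignore_index=True)

# Compare arousal and valence across the three perspectives,
# one row per 5-second interval.
per_interval = annotations.pivot_table(
    index="seconds",
    columns="perspective",
    values=["arousal", "valence"],
)
print(per_interval.head())

Aligning the three perspectives on a common 5-second time index, as above, is one way to study how self-reported emotions diverge from what partners and external observers perceive.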