Tsinghua University, Department of Computer Science and Technology, Beijing, 100084, China.
Tianjin University, College of Intelligence and Computing, Tianjin, 300350, China.
Sci Data. 2024 Aug 5;11(1):847. doi: 10.1038/s41597-024-03676-4.
Mixed emotions have attracted increasing research interest in recent years, but existing datasets rarely target mixed emotion recognition from multimodal signals, which hinders the affective computing of mixed emotions. To address this gap, we present a multimodal dataset of four signal types recorded while participants watched videos eliciting mixed and non-mixed emotions. To ensure effective emotion induction, we first applied a rule-based video filtering step to select clips that elicit stronger positive, negative, and mixed emotions. We then conducted an experiment with 80 participants, recording electroencephalography (EEG), galvanic skin response (GSR), photoplethysmography (PPG), and frontal face video while they watched the selected clips. We also collected subjective emotional ratings on the Positive and Negative Affect Schedule (PANAS), the valence-arousal-dominance (VAD) dimensions, and the amusement-disgust dimension. In total, the dataset comprises multimodal signal data and self-assessment data from 73 participants. We further present technical validations of emotion induction and of mixed emotion classification from physiological signals and face videos. The average accuracy of 3-class classification (i.e., positive, negative, and mixed) reaches 80.96% with a support vector machine (SVM) trained on features from all modalities, indicating the feasibility of identifying mixed emotional states.
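To make the reported classification setup concrete, below is a minimal sketch of the 3-class (positive/negative/mixed) SVM task on features from all four modalities. The feature dimensions, the synthetic data, the label encoding, and fusion by simple feature concatenation are all illustrative assumptions; only the use of an SVM on multimodal features comes from the abstract.

```python
# Minimal sketch (assumptions labeled) of 3-class SVM classification on
# concatenated multimodal features, loosely following the setup in the
# abstract. All data below is synthetic placeholder data for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 240  # hypothetical number of trials

# Hypothetical per-trial feature blocks, one per modality; the actual
# feature extraction in the paper is not specified in the abstract.
eeg = rng.standard_normal((n_trials, 160))   # e.g., EEG band-power features
gsr = rng.standard_normal((n_trials, 8))     # e.g., tonic/phasic GSR stats
ppg = rng.standard_normal((n_trials, 12))    # e.g., heart-rate features
face = rng.standard_normal((n_trials, 64))   # e.g., facial expression features

# Feature-level fusion by concatenation (one plausible reading of
# "features from all modalities").
X = np.hstack([eeg, gsr, ppg, face])
y = rng.integers(0, 3, size=n_trials)        # 0=positive, 1=negative, 2=mixed

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold cross-validated accuracy: {scores.mean():.2%}")
```

On real features rather than random placeholders, a pipeline of this shape is what could plausibly produce the ~80.96% average accuracy the paper reports; with synthetic labels as above, accuracy should hover near the 33% chance level.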