The uulmMAC Database-A Multimodal Affective Corpus for Affective Computing in Human-Computer Interaction.

Affiliations

Section Medical Psychology, University of Ulm, Frauensteige 6, 89075 Ulm, Germany.

Institute of Neural Information Processing, University of Ulm, James-Franck-Ring, 89081 Ulm, Germany.

Publication Information

Sensors (Basel). 2020 Apr 17;20(8):2308. doi: 10.3390/s20082308.

Abstract

In this paper, we present a multimodal dataset for affective computing research acquired in a human-computer interaction (HCI) setting. An experimental mobile and interactive scenario was designed and implemented based on a gamified generic paradigm for the induction of dialog-based, HCI-relevant emotional and cognitive load states. It consists of six experimental sequences, each designed to induce one of the targeted emotional and cognitive load states. Each sequence is followed by subjective feedback to validate the induction, a respiration baseline to let physiological reactions level off, and a summary of results. In addition, prior to the experiment, three questionnaires related to emotion regulation (ERQ), emotional control (TEIQue-SF), and personality traits (TIPI) were collected from each subject to evaluate the stability of the induction paradigm. Based on this HCI scenario, the uulmMAC dataset, comprising two homogeneous samples with a total of 60 participants and 100 recording sessions, was generated. We recorded 16 sensor modalities, including 4× video, 3× audio, and 7× biophysiological streams, as well as depth and pose streams. Additional labels and annotations were also collected. After recording, all data were post-processed and checked for technical and signal quality, resulting in a final dataset of 57 subjects and 95 recording sessions. The evaluation of the reported subjective feedback shows significant differences between the sequences, consistent with the induced states, and the analysis of the questionnaires shows stable results. In summary, our database is a valuable contribution to the fields of affective computing and multimodal data analysis: acquired in a mobile interactive scenario close to real HCI, it comprises a large number of subjects and allows transtemporal investigations. Validated via subjective feedback and checked for quality issues, it can be used for affective computing and machine learning applications.
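To make the corpus structure described above concrete, the sketch below shows one possible way to index such multimodal recordings in Python. It is only an illustration under stated assumptions: the folder layout (<root>/<subject>/<session>/) and the individual stream names are invented for this example and are not taken from the uulmMAC distribution; only the modality counts (4× video, 3× audio, 7× biophysiological, plus depth and pose) follow the abstract.

```python
# Minimal sketch, assuming a hypothetical folder layout; names are illustrative only.
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List

# Modality groups as described in the abstract: 4x video, 3x audio,
# 7x biophysiological, plus depth and pose streams (16 in total).
# The concrete stream names below are assumptions, not the database's real labels.
VIDEO_STREAMS = ["cam_front", "cam_side", "cam_face", "cam_scene"]
AUDIO_STREAMS = ["mic_headset", "mic_room", "mic_scene"]
BIO_STREAMS = ["ecg", "eda", "emg_corrugator", "emg_zygomaticus",
               "respiration", "skin_temperature", "bvp"]
OTHER_STREAMS = ["depth", "pose"]
ALL_STREAMS = VIDEO_STREAMS + AUDIO_STREAMS + BIO_STREAMS + OTHER_STREAMS


@dataclass
class RecordingSession:
    subject_id: str
    session_id: str
    streams: Dict[str, Path] = field(default_factory=dict)  # modality name -> file path


def index_sessions(root: Path) -> List[RecordingSession]:
    """Scan an assumed <root>/<subject>/<session>/ layout and collect stream files."""
    sessions: List[RecordingSession] = []
    if not root.exists():
        return sessions
    for subject_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for session_dir in sorted(p for p in subject_dir.iterdir() if p.is_dir()):
            streams = {}
            for name in ALL_STREAMS:
                matches = sorted(session_dir.glob(f"{name}.*"))
                if matches:
                    streams[name] = matches[0]
            sessions.append(RecordingSession(subject_dir.name, session_dir.name, streams))
    return sessions


if __name__ == "__main__":
    all_sessions = index_sessions(Path("uulmMAC"))  # assumed root folder name
    complete = [s for s in all_sessions if len(s.streams) == len(ALL_STREAMS)]
    print(f"{len(complete)} of {len(all_sessions)} sessions have all {len(ALL_STREAMS)} streams")
```

A completeness check like the one in the __main__ block only loosely mirrors the technical and signal-quality screening reported in the abstract, which reduced the corpus from 100 to 95 recording sessions.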

Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/02e2/7219061/bd2d11c84976/sensors-20-02308-g0A1.jpg
