

CorrNet: Fine-Grained Emotion Recognition for Video Watching Using Wearable Physiological Sensors.

Affiliations

Multimedia Computing Group, Delft University of Technology, 2600AA Delft, The Netherlands.

Centrum Wiskunde & Informatica (CWI), 1098XG Amsterdam, The Netherlands.

Publication

Sensors (Basel). 2020 Dec 24;21(1):52. doi: 10.3390/s21010052.

Abstract

Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (fine-grained segment of signals) using only wearable, physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and a wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that: (1) instance segment lengths between 1 and 4 s result in the highest recognition accuracies; (2) accuracies obtained with laboratory-grade and wearable sensors are comparable, even under low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.

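As a concrete illustration of the pipeline the abstract describes, the sketch below segments a single wearable signal into fixed-length instances and derives the two feature families CorrNet combines: per-instance statistics standing in for intra-modality features, and pairwise Pearson correlations between instances of the same stimulus as correlation-based features. This is a minimal NumPy sketch, not the authors' implementation; the function names, the 2 s segment length, and the synthetic EDA trace are illustrative assumptions.

import numpy as np

def segment_instances(signal, fs, seg_len_s=2.0):
    # Split a 1-D physiological signal (e.g., EDA or HR) into fixed-length
    # instances; the paper reports 1-4 s segment lengths working best.
    step = int(fs * seg_len_s)
    n = len(signal) // step
    return signal[:n * step].reshape(n, step)

def intra_modality_features(instances):
    # Simple per-instance statistics (stand-ins for CorrNet's learned
    # intra-modality features).
    return np.stack([instances.mean(axis=1), instances.std(axis=1)], axis=1)

def correlation_features(instances):
    # For each instance, Pearson correlations with every other instance of
    # the same video stimulus (the correlation-based features).
    corr = np.corrcoef(instances)   # (n, n) correlation matrix
    np.fill_diagonal(corr, 0.0)     # drop self-correlation
    return corr

fs = 64                              # wearable-grade sampling rate (<=64 Hz)
eda = np.random.randn(fs * 60)       # hypothetical 60 s EDA recording
inst = segment_instances(eda, fs, seg_len_s=2.0)
features = np.hstack([intra_modality_features(inst), correlation_features(inst)])
print(features.shape)                # (30, 32): one feature vector per instance

Each row of features could then feed a binary high-low classifier for valence or arousal, which is the subject-independent setup the reported accuracies refer to.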

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2c10/7795677/babb04562e7c/sensors-21-00052-g001.jpg
