Early versus Late Modality Fusion of Deep Wearable Sensor Features for Personalized Prediction of Tomorrow's Mood, Health, and Stress.

Author Information

Li Boning, Sano Akane

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:5896-5899. doi: 10.1109/EMBC44109.2020.9175463.

Abstract

Predicting mood, health, and stress can sound an early alarm against mental illness. Multi-modal data from wearable sensors provide rigorous and rich insights into one's internal states. Recently, deep learning-based features on continuous high-resolution sensor data have outperformed statistical features in several ubiquitous and affective computing applications including sleep detection and depression diagnosis. Motivated by this, we investigate multi-modal data fusion strategies featuring deep representation learning of skin conductance, skin temperature, and acceleration data to predict self-reported mood, health, and stress scores (0 - 100) of college students (N = 239). Our cross-validated results from the early fusion framework exhibit a significantly higher (p < 0.05) prediction precision over the late fusion for unseen users. Therefore, our findings call attention to the benefits of fusing physiological data modalities at a low level and corroborate the predictive efficacy of the deeply learned features.
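
The abstract contrasts early fusion (combining the physiological modalities at a low level before representation learning) with late fusion (learning per-modality deep features and combining them afterwards). The minimal PyTorch sketch below illustrates that distinction only; the 1D-CNN encoders, layer sizes, sampling rate, and single regression head are illustrative assumptions and are not the authors' actual model.

# Minimal sketch (not the authors' architecture): early vs. late fusion of three
# wearable modalities -- skin conductance (EDA), skin temperature, and 3-axis
# acceleration -- for regressing a 0-100 self-report score.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """1D-CNN feature extractor applied to one modality (or to the fused input)."""
    def __init__(self, in_channels, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
            nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
    def forward(self, x):
        return self.net(x)

class EarlyFusion(nn.Module):
    """Fuse at a low level: stack raw channels, then learn one shared representation."""
    def __init__(self, feat_dim=32):
        super().__init__()
        # EDA (1 ch) + skin temperature (1 ch) + acceleration (3 ch) = 5 channels
        self.encoder = Encoder(in_channels=5, feat_dim=feat_dim)
        self.head = nn.Linear(feat_dim, 1)
    def forward(self, eda, temp, acc):
        x = torch.cat([eda, temp, acc], dim=1)  # (batch, 5, time)
        return self.head(self.encoder(x))

class LateFusion(nn.Module):
    """Encode each modality separately, then fuse the learned deep features
    (one common form of late fusion) before the regression head."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.eda_enc = Encoder(1, feat_dim)
        self.temp_enc = Encoder(1, feat_dim)
        self.acc_enc = Encoder(3, feat_dim)
        self.head = nn.Linear(3 * feat_dim, 1)
    def forward(self, eda, temp, acc):
        feats = torch.cat(
            [self.eda_enc(eda), self.temp_enc(temp), self.acc_enc(acc)], dim=1)
        return self.head(feats)

if __name__ == "__main__":
    # Toy batch of 4 users, one hour of 1 Hz data -- shapes are illustrative only.
    eda = torch.randn(4, 1, 3600)
    temp = torch.randn(4, 1, 3600)
    acc = torch.randn(4, 3, 3600)
    print(EarlyFusion()(eda, temp, acc).shape)  # torch.Size([4, 1])
    print(LateFusion()(eda, temp, acc).shape)   # torch.Size([4, 1])

In this sketch, the only difference between the two strategies is where the concatenation happens: before the shared encoder (early) or after the per-modality encoders (late); the paper reports the former performing significantly better for unseen users.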
