

Sensing emotional valence and arousal dynamics through automated facial action unit analysis.

Affiliations

Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo, Kyoto, 606-8507, Japan.

Psychological Process Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan.

Publication

Sci Rep. 2024 Aug 22;14(1):19563. doi: 10.1038/s41598-024-70563-8.

Abstract

Information about the concordance between dynamic emotional experiences and objective signals is practically useful. Previous studies have shown that valence dynamics can be estimated by recording electrical activity from the muscles in the brows and cheeks. However, whether facial actions based on video data and analyzed without electrodes can be used for sensing emotion dynamics remains unknown. We investigated this issue by recording video of participants' faces and obtaining dynamic valence and arousal ratings while they observed emotional films. Action units (AUs) 04 (i.e., brow lowering) and 12 (i.e., lip-corner pulling), detected through an automated analysis of the video data, were negatively and positively correlated with dynamic ratings of subjective valence, respectively. Several other AUs were also correlated with dynamic valence or arousal ratings. Random forest regression modeling, interpreted using the SHapley Additive exPlanation tool, revealed non-linear associations between the AUs and dynamic ratings of valence or arousal. These results suggest that an automated analysis of facial expression video data can be used to estimate dynamic emotional states, which could be applied in various fields including mental health diagnosis, security monitoring, and education.
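The modeling step the abstract describes (random forest regression from AU intensities to dynamic valence ratings) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the toy relation between AU04/AU12 and valence, the 0-5 intensity scale, and the use of `feature_importances_` (in place of the SHAP tool the paper used for interpretation) are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Synthetic AU intensities on a 0-5 scale, mimicking automated AU-detector output
au04 = rng.uniform(0, 5, n)  # AU04: brow lowering
au12 = rng.uniform(0, 5, n)  # AU12: lip-corner pulling

# Assumed toy relation echoing the reported signs: valence falls with AU04,
# rises with AU12, plus noise. The real mapping is non-linear and data-driven.
valence = 0.5 * au12 - 0.4 * au04 + rng.normal(0, 0.3, n)

X = np.column_stack([au04, au12])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, valence)

# Impurity-based importances as a crude stand-in for SHAP attribution:
# both AUs should carry predictive weight for valence.
print(model.feature_importances_)
```

In the paper, per-feature attributions were instead computed with the SHapley Additive exPlanation tool, which (unlike global importances) can reveal the non-linear, per-sample associations between individual AUs and the dynamic ratings.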


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bd6d/11341571/56aa7b36da9e/41598_2024_70563_Fig1_HTML.jpg
