Video-Based Stress Detection through Deep Learning.

Affiliation

Department of Computer Science and Technology, Centre for Computational Mental Healthcare, Research Institute of Data Science, Tsinghua University, Beijing 100084, China.

Publication Information

Sensors (Basel). 2020 Sep 28;20(19):5552. doi: 10.3390/s20195552.

Abstract

Stress has become an increasingly serious problem in modern society, threatening people's well-being. With video cameras ubiquitously deployed in everyday surroundings, detecting stress with contact-free camera sensors offers a cost-effective and broadly reachable approach, free from interference by artificial traits and factors. In this study, we leverage users' facial expressions and action motions in video and present a two-leveled stress detection network (TSDNet). TSDNet first learns face- and action-level representations separately, and then fuses the results through a stream-weighted integrator with local and global attention for stress identification. To evaluate the performance of TSDNet, we constructed a video dataset containing 2092 labeled video clips. Experimental results on this dataset show that: (1) TSDNet outperformed hand-crafted feature engineering approaches, with a detection accuracy of 85.42% and an F1-Score of 85.28%, demonstrating the feasibility and effectiveness of using deep learning to analyze one's face and action motions; and (2) considering both facial expressions and action motions improved detection accuracy and F1-Score by over 7% compared with methods that consider only the face or only actions.
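
The abstract describes a two-stream design: separate face- and action-level encoders whose outputs are fused by a stream-weighted integrator before classification. The PyTorch sketch below illustrates only that general pattern; the backbone layers, feature sizes, and the StreamEncoder/TwoStreamStressDetector names are illustrative assumptions, and it does not reproduce TSDNet's local and global attention mechanism.

```python
# Minimal sketch of a two-stream (face/action) detector with a learned,
# per-clip stream-weighted fusion. All module names and sizes are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class StreamEncoder(nn.Module):
    """Encodes one stream (face crops or action/motion frames) into a vector.

    A small stack of 3D convolutions over (channels, time, height, width)
    stands in for whatever backbone the paper actually uses.
    """

    def __init__(self, in_channels: int = 3, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global average pool over T, H, W
            nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, time, height, width)
        return self.backbone(clip)


class TwoStreamStressDetector(nn.Module):
    """Fuses face- and action-level representations with learned stream weights."""

    def __init__(self, feat_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.face_encoder = StreamEncoder(feat_dim=feat_dim)
        self.action_encoder = StreamEncoder(feat_dim=feat_dim)
        # Scores how much each stream should contribute for a given clip.
        self.stream_scorer = nn.Linear(feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, face_clip: torch.Tensor, action_clip: torch.Tensor) -> torch.Tensor:
        face_feat = self.face_encoder(face_clip)        # (batch, feat_dim)
        action_feat = self.action_encoder(action_clip)  # (batch, feat_dim)
        feats = torch.stack([face_feat, action_feat], dim=1)       # (batch, 2, feat_dim)
        # Softmax over the two streams yields per-clip fusion weights.
        weights = torch.softmax(self.stream_scorer(feats), dim=1)  # (batch, 2, 1)
        fused = (weights * feats).sum(dim=1)            # (batch, feat_dim)
        return self.classifier(fused)                   # stressed / not-stressed logits


if __name__ == "__main__":
    model = TwoStreamStressDetector()
    face = torch.randn(4, 3, 16, 112, 112)    # 4 clips, 16 frames each
    action = torch.randn(4, 3, 16, 112, 112)
    print(model(face, action).shape)          # torch.Size([4, 2])
```

In this sketch the softmax over per-stream scores plays the role of the stream-weighted integrator: each clip receives its own face/action mixing ratio rather than a fixed global one.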

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7f0/7582689/f6d097800361/sensors-20-05552-g001.jpg

Similar Articles

1. Video-Based Stress Detection through Deep Learning. Sensors (Basel). 2020 Sep 28;20(19):5552. doi: 10.3390/s20195552.
2. Deep Learning-Based Pain Classifier Based on the Facial Expression in Critically Ill Patients. Front Med (Lausanne). 2022 Mar 17;9:851690. doi: 10.3389/fmed.2022.851690. eCollection 2022.
3. PolarBearVidID: A Video-Based Re-Identification Benchmark Dataset for Polar Bears. Animals (Basel). 2023 Feb 23;13(5):801. doi: 10.3390/ani13050801.
4. Paying attention to uncertainty: A stochastic multimodal transformers for post-traumatic stress disorder detection using video. Comput Methods Programs Biomed. 2024 Dec;257:108439. doi: 10.1016/j.cmpb.2024.108439. Epub 2024 Sep 26.
5. Fusion of Video and Inertial Sensing for Deep Learning-Based Human Action Recognition. Sensors (Basel). 2019 Aug 24;19(17):3680. doi: 10.3390/s19173680.
7. Synthetic Expressions are Better Than Real for Learning to Detect Facial Actions. IEEE Winter Conf Appl Comput Vis. 2021 Jan;2021:1247-1256. doi: 10.1109/wacv48630.2021.00129. Epub 2021 Jun 14.
8. EAC-Net: Deep Nets with Enhancing and Cropping for Facial Action Unit Detection. IEEE Trans Pattern Anal Mach Intell. 2018 Nov;40(11):2583-2596. doi: 10.1109/TPAMI.2018.2791608. Epub 2018 Jan 10.
9. Learning Representations for Facial Actions From Unlabeled Videos. IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):302-317. doi: 10.1109/TPAMI.2020.3011063. Epub 2021 Dec 7.

Cited By

1. Stress can be detected during emotion-evoking smartphone use: a pilot study using machine learning. Front Digit Health. 2025 Apr 30;7:1578917. doi: 10.3389/fdgth.2025.1578917. eCollection 2025.
3. Deep-Learning-Based Stress Recognition with Spatial-Temporal Facial Information. Sensors (Basel). 2021 Nov 11;21(22):7498. doi: 10.3390/s21227498.
