


Machine-learning detection of stress severity expressed on a continuous scale using acoustic, verbal, visual, and physiological data: lessons learned.

Author information

Ciharova Marketa, Amarti Khadicha, van Breda Ward, Gevonden Martin J, Ghassemi Sina, Kleiboer Annet, Vinkers Christiaan H, Sep Milou S C, Trofimova Sophia, Cooper Alexander C, Peng Xianhua, Schulte Mieke, Karyotaki Eirini, Cuijpers Pim, Riper Heleen

Affiliations

Department of Clinical, Neuro- and Developmental Psychology, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, Netherlands.

Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, Netherlands.

Publication information

Front Psychiatry. 2025 Jun 13;16:1548287. doi: 10.3389/fpsyt.2025.1548287. eCollection 2025.

DOI:10.3389/fpsyt.2025.1548287
PMID:40585547
Full text link: https://pmc.ncbi.nlm.nih.gov/articles/PMC12203116/
Abstract

BACKGROUND

Early detection of elevated acute stress is necessary if we aim to reduce consequences associated with prolonged or recurrent stress exposure. Stress monitoring may be supported by valid and reliable machine-learning algorithms. However, investigation of algorithms detecting stress severity on a continuous scale is missing due to high demands on data quality for such analyses. Use of multimodal data, meaning data coming from multiple sources, might contribute to machine-learning stress severity detection. We aimed to detect laboratory-induced stress using multimodal data and identify challenges researchers may encounter when conducting a similar study.

METHODS

We conducted a preliminary exploration of performance of a machine-learning algorithm trained on multimodal data, namely visual, acoustic, verbal, and physiological features, in its ability to detect stress severity following a partially automated online version of the Trier Social Stress Test. College students (n = 42; mean age = 20.79, 69% female) completed a self-reported stress visual analogue scale at five time-points: After the initial resting period (P1), during the three stress-inducing tasks (i.e., preparation for a presentation, a presentation task, and an arithmetic task, P2-4) and after a recovery period (P5). For the whole duration of the experiment, we recorded the participants' voice and facial expressions by a video camera and measured cardiovascular and electrodermal physiology by an ambulatory monitoring system. Then, we evaluated the performance of the algorithm in detection of stress severity using 3 combinations of visual, acoustic, verbal, and physiological data collected at each of the periods of the experiment (P1-5).
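The evaluation pipeline described above — per-modality features fused into one input, a model trained to predict continuous stress ratings, and the detected scores compared against self-report — can be sketched roughly as follows. The feature dimensions, the early-fusion strategy, and the ridge regressor are illustrative assumptions for this sketch, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature blocks for 42 participants x 5 periods
# (dimensions are illustrative, not taken from the paper).
n_samples = 42 * 5
visual = rng.normal(size=(n_samples, 8))
acoustic = rng.normal(size=(n_samples, 6))
verbal = rng.normal(size=(n_samples, 4))
physio = rng.normal(size=(n_samples, 3))

# Early fusion: concatenate all modalities into one feature matrix.
X = np.hstack([visual, acoustic, verbal, physio])
# Simulated 0-100 visual-analogue-scale stress ratings (the ground truth).
y = np.clip(50 + 15 * rng.normal(size=n_samples), 0, 100)

# Ridge regression via the closed-form normal equations.
lam = 1.0
XtX = X.T @ X + lam * np.eye(X.shape[1])
w = np.linalg.solve(XtX, X.T @ y)
y_hat = X @ w

# Pearson correlation between detected and observed scores --
# the association statistic the abstract reports.
r = np.corrcoef(y_hat, y)[0, 1]
print(round(float(r), 3))
```

With random features, as here, r hovers near zero; the study's reported r of .154 sits only slightly above that floor, which is what motivates the fallback to binary classification in the Results.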

RESULTS

Participants reported minimal (P1, M = 21.79, SD = 17.45) to moderate stress severity (P2, M = 47.95, SD = 15.92), depending on the period at hand. We found a very weak association between the detected and observed scores (r = .154; p = .021). In our analysis, we classified participants into categories of stressed and non-stressed individuals. When applying all available features (i.e., visual, acoustic, verbal, and physiological), or a combination of visual, acoustic and verbal features, performance ranged from acceptable to good, but only for the presentation task (accuracy up to .71, F1-score up to .73).
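Because the continuous detection proved weak, the evaluation fell back on a binary stressed vs. non-stressed comparison scored with accuracy and F1. A minimal sketch of that scoring step (the 40-point cut-off and the score values are made-up illustrations, not values from the study):

```python
# Dichotomize continuous 0-100 stress ratings at an assumed cut-off,
# then score the binary labels with accuracy and F1.
def dichotomize(scores, cutoff=40.0):
    # The cut-off is illustrative; the paper's exact threshold is not given here.
    return [1 if s >= cutoff else 0 for s in scores]

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

observed = [55.0, 30.0, 62.0, 47.9, 21.8, 70.0]  # illustrative VAS ratings
detected = [58.0, 35.0, 38.0, 52.0, 25.0, 66.0]  # illustrative model output
y_true = dichotomize(observed)
y_pred = dichotomize(detected)
print(accuracy(y_true, y_pred), round(f1_score(y_true, y_pred), 2))
# -> 0.8333333333333334 0.86
```

Dichotomizing discards the severity information the study set out to capture, which is why the authors frame the acceptable-to-good binary scores as a partial result rather than a success of continuous detection.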

CONCLUSIONS

The complexity of input features needed for machine-learning detection of stress severity based on multimodal data requires large samples with wide variability of stress reactions and inputs among participants. These are difficult to recruit in laboratory settings, given the high time and effort demands on both researchers and participants. The resources needed may be reduced by automating experimental procedures, which may, however, introduce additional technological challenges, potentially causing further recruitment setbacks. Further investigation is necessary, with emphasis on quality ground truth, i.e., gold-standard (self-report) instruments, but also outside laboratory experiments, mainly in general populations and mental health care patients.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d6f4/12203116/570cfa10cd42/fpsyt-16-1548287-g001.jpg

