

Medically-oriented design for explainable AI for stress prediction from physiological measurements.

Affiliations

Electrical and Computer Engineering Department, American University of Beirut, Beirut, Lebanon.

Pathfinding, Automation Technology and Analytics, Intel Corporation, Hillsboro, Oregon, USA.

Publication

BMC Med Inform Decis Mak. 2022 Feb 11;22(1):38. doi: 10.1186/s12911-022-01772-2.

DOI: 10.1186/s12911-022-01772-2
PMID: 35148762
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8840288/
Abstract

BACKGROUND

In the last decade, considerable attention has been devoted to developing artificial intelligence (AI) solutions for mental health using machine learning. To build trust in AI applications, it is crucial that AI systems provide practitioners and patients with the reasons behind the AI's decisions; this is referred to as explainable AI. While there has been significant progress in developing stress prediction models, little work has been done on explainable AI for mental health.

METHODS

In this work, we address this gap by designing an explanatory AI report for stress prediction from wearable sensors. Because medical practitioners and patients are likely to be familiar with blood test reports, we modeled the look and feel of the explanatory AI report on those of a standard blood test report. The report includes the stress prediction and the physiological signals related to stressful episodes. In addition to the new design for explaining AI in mental health, the work includes the following contributions: methods to automatically generate the different components of the report, an approach for evaluating and validating the accuracy of the explanations, and a collection of ground-truth relationships between physiological measurements and stress prediction.
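The report generation described above can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: the feature names (heart rate, EDA peaks, skin temperature), the linear surrogate model, and the contribution formula (weight times standardized feature value) are all assumptions chosen to show how per-signal contributions could populate a blood-test-style report.

```python
import numpy as np

def feature_contributions(weights, x, mean, std):
    """Per-feature contribution of one sample to a linear stress score:
    contribution_i = w_i * (x_i - mean_i) / std_i."""
    z = (np.asarray(x) - np.asarray(mean)) / np.asarray(std)
    return np.asarray(weights) * z

def stress_report(features, weights, x, mean, std, threshold=0.0):
    """Assemble a blood-test-style report: a prediction line followed by
    the physiological signals ranked by absolute contribution."""
    contrib = feature_contributions(weights, x, mean, std)
    score = contrib.sum()
    label = "STRESS" if score > threshold else "NO STRESS"
    lines = [f"Stress prediction: {label} (score={score:.2f})"]
    for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
        lines.append(f"  {name:<12} contribution {c:+.2f}")
    return "\n".join(lines)

# Hypothetical wearable-derived features and trained linear weights
features = ["HR_mean", "EDA_peaks", "TEMP_mean"]
weights = [0.8, 1.1, -0.3]
mean, std = [70.0, 4.0, 33.0], [8.0, 2.0, 0.5]
print(stress_report(features, weights, [92.0, 9.0, 32.5], mean, std))
```

A deployed system would replace the linear surrogate with the actual stress classifier and an attribution method appropriate to it, but the report layout (prediction first, ranked signal contributions after) stays the same.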

RESULTS

Test results showed that the explanations were consistent with ground truth. The reference intervals for stress versus non-stress were quite distinctive, with little variation. In addition to the quantitative evaluations, a qualitative survey of three expert psychiatrists confirmed the usefulness of the explanation report in understanding the different aspects of the AI system.
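Clinical lab reports conventionally state a reference interval as the central 95% range (2.5th to 97.5th percentile) of a healthy population. The abstract does not specify how the stress/non-stress intervals were derived, so the sketch below assumes that percentile convention and uses synthetic heart-rate samples purely for illustration.

```python
import numpy as np

def reference_interval(samples, lo=2.5, hi=97.5):
    """Central 95% reference interval (2.5th-97.5th percentile),
    the convention used in clinical laboratory reports."""
    low, high = np.percentile(samples, [lo, hi])
    return float(low), float(high)

rng = np.random.default_rng(0)
# Hypothetical heart-rate samples (bpm) under each condition
hr_rest = rng.normal(70, 5, 1000)
hr_stress = rng.normal(95, 6, 1000)
print("HR non-stress:", reference_interval(hr_rest))
print("HR stress:    ", reference_interval(hr_stress))
```

When the two intervals barely overlap, as in the paper's results, a measured value can be flagged against the stress interval much as an out-of-range analyte is flagged on a blood test.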

CONCLUSION

In this work, we have provided a new design for explainable AI used in stress prediction based on physiological measurements. From the report, users and medical practitioners can determine which biological features have the most impact on the stress prediction, in addition to any health-related abnormalities. The effectiveness of the explainable AI report was evaluated through both quantitative and qualitative assessment. The stress prediction accuracy was shown to be comparable to the state of the art, and the contributions of each physiological signal to the stress prediction were shown to correlate with ground truth. In addition to these quantitative evaluations, a qualitative survey with psychiatrists confirmed their confidence in the report and its effectiveness in explaining the stress predictions made by the AI system. Future work includes the addition of more explanatory features related to other emotional states of the patient, such as sadness, relaxation, anxiousness, or happiness.


Figure images (Figs. 1-8, PMC8840288):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ae7/8840288/d8e689570d51/12911_2022_1772_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ae7/8840288/67dd9461bfad/12911_2022_1772_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ae7/8840288/359f03cf0d8c/12911_2022_1772_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ae7/8840288/6350d0742da4/12911_2022_1772_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ae7/8840288/24e5390fb8a4/12911_2022_1772_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ae7/8840288/5d3a8d847c27/12911_2022_1772_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ae7/8840288/e147698ab885/12911_2022_1772_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ae7/8840288/eca5d47b1ccd/12911_2022_1772_Fig8_HTML.jpg
