
An Interpretable Model With Probabilistic Integrated Scoring for Mental Health Treatment Prediction: Design Study.

Authors

Kelly Anthony, Jensen Esben Kjems, Grua Eoin Martino, Mathiasen Kim, Van de Ven Pepijn

Affiliations

Department of Electronic and Computer Engineering, University of Limerick, Limerick, Ireland.

Health Research Institute, University of Limerick, Limerick, Ireland.

Publication

JMIR Med Inform. 2025 Mar 26;13:e64617. doi: 10.2196/64617.

DOI: 10.2196/64617
PMID: 40138679
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11982765/
Abstract

BACKGROUND

Machine learning (ML) systems in health care have the potential to enhance decision-making but often fail to address critical issues such as prediction explainability, confidence, and robustness in a context-based and easily interpretable manner.

OBJECTIVE

This study aimed to design and evaluate an ML model for a future decision support system for clinical psychopathological treatment assessments. The novel ML model is inherently interpretable and transparent. It aims to enhance clinical explainability and trust through a transparent, hierarchical model structure that progresses from questions to scores to classification predictions. The model confidence and robustness were addressed by applying Monte Carlo dropout, a probabilistic method that reveals model uncertainty and confidence.
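The abstract names Monte Carlo dropout as the mechanism for surfacing model uncertainty: dropout is kept active at inference and the prediction is repeated many times, so the spread of the sampled class probabilities reflects the model's confidence. A minimal NumPy sketch of the idea follows; the weights `W1`/`W2` and the feature vector are random stand-ins, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer network (stand-in weights, NOT the paper's model).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))  # 4 classes: depression, panic, social phobia, specific phobia

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, T=200, p_drop=0.5):
    """Run T stochastic forward passes with dropout ACTIVE at inference.

    The per-class standard deviation of the sampled probabilities is a
    proxy for model uncertainty (the MC dropout idea)."""
    probs = np.empty((T, W2.shape[1]))
    for t in range(T):
        h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop  # sample a fresh dropout mask each pass
        h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
        probs[t] = softmax(h @ W2)
    return probs.mean(axis=0), probs.std(axis=0)

x = rng.normal(size=8)  # one patient's (hypothetical) feature vector
mean_p, std_p = mc_dropout_predict(x)
```

A wide `std_p` on a class, or heavily overlapping class distributions across the T samples, is what the paper proposes showing clinicians graphically.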

METHODS

A model for clinical psychopathological treatment assessments was developed, incorporating a novel ML model structure. The model aimed at enhancing the graphical interpretation of the model outputs and addressing issues of prediction explainability, confidence, and robustness. The proposed ML model was trained and validated using patient questionnaire answers and demographics from a web-based treatment service in Denmark (N=1088).

RESULTS

The balanced accuracy score on the test set was 0.79. The precision was ≥0.71 for all 4 prediction classes (depression, panic, social phobia, and specific phobia). The area under the curve for the 4 classes was 0.93, 0.92, 0.91, and 0.98, respectively.
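Balanced accuracy, the headline metric here, is the unweighted mean of per-class recall, which keeps a majority class (e.g. depression) from dominating the score. A short sketch with purely illustrative labels for the four treatment classes:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, n_classes=4):
    """Unweighted mean of per-class recall, robust to the class
    imbalance typical of clinical datasets."""
    recalls = []
    for c in range(n_classes):
        mask = (y_true == c)
        if mask.any():
            recalls.append((y_pred[mask] == c).mean())
    return float(np.mean(recalls))

# Illustrative labels only (0=depression, 1=panic, 2=social phobia, 3=specific phobia).
y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 0, 0, 1, 1, 1, 2, 0, 3, 3])
print(balanced_accuracy(y_true, y_pred))  # (0.75 + 1.0 + 0.5 + 1.0) / 4 = 0.8125
```

Plain accuracy on the same toy labels would be 0.8, but balanced accuracy rewards the perfect recall on the two small classes; on the paper's imbalanced test set the two metrics can diverge much more.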

CONCLUSIONS

We have demonstrated a mental health treatment ML model that supported a graphical interpretation of prediction class probability distributions. Their spread and overlap can inform clinicians of competing treatment possibilities for patients and uncertainty in treatment predictions. With the ML model achieving 79% balanced accuracy, we expect that the model will be clinically useful in both screening new patients and informing clinical interviews.


[Figures 1–20 of the article are available at https://pmc.ncbi.nlm.nih.gov/articles/PMC11982765/]

Similar Articles

1. An Interpretable Model With Probabilistic Integrated Scoring for Mental Health Treatment Prediction: Design Study. JMIR Med Inform. 2025 Mar 26;13:e64617. doi: 10.2196/64617.
2. An Explainable Artificial Intelligence Software Tool for Weight Management Experts (PRIMO): Mixed Methods Study. J Med Internet Res. 2023 Sep 6;25:e42047. doi: 10.2196/42047.
3. Investigating Protective and Risk Factors and Predictive Insights for Aboriginal Perinatal Mental Health: Explainable Artificial Intelligence Approach. J Med Internet Res. 2025 Apr 30;27:e68030. doi: 10.2196/68030.
4. Towards clinical prediction with transparency: An explainable AI approach to survival modelling in residential aged care. Comput Methods Programs Biomed. 2025 May;263:108653. doi: 10.1016/j.cmpb.2025.108653. Epub 2025 Feb 15.
5. Development and Feasibility Study of HOPE Model for Prediction of Depression Among Older Adults Using Wi-Fi-based Motion Sensor Data: Machine Learning Study. JMIR Aging. 2025 Mar 3;8:e67715. doi: 10.2196/67715.
6. Responsible AI for cardiovascular disease detection: Towards a privacy-preserving and interpretable model. Comput Methods Programs Biomed. 2024 Sep;254:108289. doi: 10.1016/j.cmpb.2024.108289. Epub 2024 Jun 17.
7. An innovative artificial intelligence-based method to compress complex models into explainable, model-agnostic and reduced decision support systems with application to healthcare (NEAR). Artif Intell Med. 2024 May;151:102841. doi: 10.1016/j.artmed.2024.102841. Epub 2024 Mar 12.
8. An Advanced Machine Learning Model for a Web-Based Artificial Intelligence-Based Clinical Decision Support System Application: Model Development and Validation Study. J Med Internet Res. 2024 Sep 4;26:e56022. doi: 10.2196/56022.
9. COVID-Net Biochem: an explainability-driven framework to building machine learning models for predicting survival and kidney injury of COVID-19 patients from clinical and biochemistry data. Sci Rep. 2023 Oct 9;13(1):17001. doi: 10.1038/s41598-023-42203-0.
10. A Machine Learning Approach with Human-AI Collaboration for Automated Classification of Patient Safety Event Reports: Algorithm Development and Validation Study. JMIR Hum Factors. 2024 Jan 25;11:e53378. doi: 10.2196/53378.
