

Personalized health monitoring using explainable AI: bridging trust in predictive healthcare.

Authors

Vani M Sree, Sudhakar Rayapati Venkata, Mahendar A, Ledalla Sukanya, Radha Marepalli, Sunitha M

Affiliations

Department of CSE, BVRIT Hyderabad College of Engineering for Women, 500090, Hyderabad, India.

Department of Computer Science and Engineering, Geethanjali College of Engineering and Technology, Cheeryal, Medchal District, Hyderabad, 500043, Telangana, India.

Publication

Sci Rep. 2025 Aug 29;15(1):31892. doi: 10.1038/s41598-025-15867-z.


DOI: 10.1038/s41598-025-15867-z
PMID: 40883377
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12397250/
Abstract

AI has propelled the move toward personalized health monitoring and early disease prediction. Unfortunately, a significant limitation of many deep learning models is that they are not interpretable, which restricts their clinical utility and undermines clinicians' trust. Moreover, most existing methods offer only generic or post-hoc explanations, and few support accurate, individualized, patient-level explanations. Existing approaches are also often restricted to static, limited-domain datasets and do not generalize across healthcare scenarios. To tackle these problems, we propose a new deep learning approach, PersonalCareNet, for personalized health monitoring based on the MIMIC-III clinical dataset. Our system combines convolutional neural networks with attention (CHARMS) and employs SHAP (SHapley Additive exPlanations) to obtain both global and patient-specific model interpretability. By leveraging a wide range of clinical features, the model offers clinically interpretable insights into feature contributions while supporting real-time risk prediction, thereby increasing transparency and instilling clinically oriented trust. An extensive evaluation shows that PersonalCareNet achieves 97.86% accuracy, exceeding multiple notable state-of-the-art healthcare risk prediction models. The framework provides explainability at the local level (through force plots, SHAP summary visualizations, and confusion-matrix-based diagnostics) and at the global level (through feature importance plots and Top-N visualizations). Quantitative results demonstrate that much of this improvement is achieved without paying a high price for interpretability.
We propose a cost-effective and systematic AI-based platform that is scalable, accurate, transparent, and interpretable for critical care and personalized diagnostics. By filling the gap between performance and interpretability, PersonalCareNet promises a significant advancement toward reliable, clinically validated predictive healthcare AI. The design also allows extension to multiple data types and real-time deployment at the edge, broadening its impact and adaptability.
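The patient-level SHAP attributions described in the abstract can be illustrated with a minimal, self-contained sketch. The model, weights, and baseline below are hypothetical stand-ins, not the paper's PersonalCareNet: the sketch computes exact Shapley values for a toy linear risk score by enumerating feature coalitions, imputing "absent" features from a background mean. This is the additive-attribution idea that the SHAP library implements efficiently for deep models.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear risk model over 4 clinical features (illustrative only).
WEIGHTS = [0.8, -0.5, 1.2, 0.3]
BIAS = 0.1
BASELINE = [1.0, 2.0, 0.5, 4.0]  # stand-in for cohort-average feature values

def predict(x):
    """Toy risk score: weighted sum of features plus bias."""
    return sum(w * v for w, v in zip(WEIGHTS, x)) + BIAS

def coalition_value(x, subset):
    """Model output when only features in `subset` take the patient's values;
    the rest are imputed with the background baseline."""
    masked = [x[i] if i in subset else BASELINE[i] for i in range(len(x))]
    return predict(masked)

def shapley_values(x):
    """Exact Shapley values by enumerating all coalitions (feasible for small n)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (coalition_value(x, set(S) | {i})
                                    - coalition_value(x, set(S)))
    return phi

patient = [2.0, 1.0, 3.0, 4.0]  # one hypothetical patient's feature vector
phi = shapley_values(patient)
```

The attributions satisfy the efficiency property (they sum to the prediction minus the baseline prediction), which is what makes per-patient "force plot" style explanations additive and auditable; for a linear model each value reduces to w_i * (x_i - baseline_i).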


Figures (PMC full text):
Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/be0e9e43b155/41598_2025_15867_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/f53fdb878026/41598_2025_15867_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/b5e21546e992/41598_2025_15867_Fig3_HTML.jpg
Fig. 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/f3912499d28d/41598_2025_15867_Fig4_HTML.jpg
Fig. 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/7aab0ec80a6d/41598_2025_15867_Fig5_HTML.jpg
Fig. 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/da01237a767a/41598_2025_15867_Fig6_HTML.jpg
Fig. 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/742f8ddd8627/41598_2025_15867_Fig7_HTML.jpg
Fig. 8: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/33f1d452799f/41598_2025_15867_Fig8_HTML.jpg
Fig. 9: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/dd89a378140c/41598_2025_15867_Fig9_HTML.jpg
Fig. 10: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/1569504f71c0/41598_2025_15867_Fig10_HTML.jpg
Fig. 11: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/8e6928048d25/41598_2025_15867_Fig11_HTML.jpg
Fig. 12: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/3000ee4b620d/41598_2025_15867_Fig12_HTML.jpg
Fig. 13: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/f17edb2bd85d/41598_2025_15867_Fig13_HTML.jpg
Fig. a: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/fe704081155e/41598_2025_15867_Figa_HTML.jpg
Fig. b: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/62bd4805c657/41598_2025_15867_Figb_HTML.jpg
Fig. c: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77e0/12397250/dc2e37d17eaa/41598_2025_15867_Figc_HTML.jpg


