Vani M Sree, Sudhakar Rayapati Venkata, Mahendar A, Ledalla Sukanya, Radha Marepalli, Sunitha M
Department of CSE, BVRIT Hyderabad College of Engineering for Women, Hyderabad, 500090, India.
Department of Computer Science and Engineering, Geethanjali College of Engineering and Technology, Cheeryal, Medchal District, Hyderabad, 500043, Telangana, India.
Sci Rep. 2025 Aug 29;15(1):31892. doi: 10.1038/s41598-025-15867-z.
AI has advanced the prospects for personalized health care and early disease prediction. A significant limitation of many deep learning models, however, is their lack of interpretability, which restricts clinical utility and undermines clinician trust. Most existing methods report only generic or post-hoc explanations, and few support accurate, individualized, patient-level explanations. Furthermore, existing approaches are often restricted to static, limited-domain datasets and do not generalize across diverse healthcare scenarios. To address these problems, we propose PersonalCareNet, a new deep learning approach for personalized health monitoring based on the MIMIC-III clinical dataset. Our system combines convolutional neural networks with attention (CHARMS) and employs SHAP (SHapley Additive exPlanations) to obtain both global and patient-specific model interpretability. By leveraging a broad set of clinical features, the model offers clinically interpretable insights into feature contributions while supporting real-time risk prediction, increasing transparency and fostering clinician trust. An extensive evaluation shows that PersonalCareNet achieves 97.86% accuracy, exceeding multiple notable state-of-the-art healthcare risk-prediction models. The framework offers explainability at the local level (through force plots, SHAP summary visualizations, and confusion-matrix-based diagnostics) and at the global level (through feature-importance plots and Top-N visualizations). Our quantitative results demonstrate that this performance is achieved without paying a high price for interpretability.
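The patient-specific attributions described above rest on the Shapley additive property: each feature's contribution is its average marginal effect over all feature coalitions, and the contributions sum to the difference between the model's output for the patient and for a baseline. The following is a minimal, self-contained sketch of that idea via brute-force enumeration; the toy linear "risk score", its weights, and the feature values are purely illustrative and are not drawn from the paper (the SHAP library computes these attributions efficiently for real models).

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.
    'Absent' features are replaced by their baseline values."""
    n = len(x)

    def eval_subset(present):
        z = [x[j] if j in present else baseline[j] for j in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            # Weight each coalition size by |S|! (n - |S| - 1)! / n!
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (eval_subset(set(S) | {i}) - eval_subset(set(S)))
        phis.append(phi)
    return phis

# Toy linear risk score over three hypothetical clinical features.
weights = [0.4, -0.2, 0.7]
risk = lambda z: sum(w * v for w, v in zip(weights, z)) + 0.1

patient = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(risk, patient, baseline)
# Additivity: the attributions sum to risk(patient) - risk(baseline).
```

For a linear model the attribution for feature i reduces to `weights[i] * (patient[i] - baseline[i])`, which makes the output of this sketch easy to verify by hand; the same additivity guarantee is what lets SHAP force plots decompose an individual prediction into per-feature contributions.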
We propose a cost-effective, systematic AI-based platform that is scalable, accurate, transparent, and interpretable for critical care and personalized diagnostics. By bridging the gap between performance and interpretability, PersonalCareNet represents a significant advance toward reliable, clinically validated predictive healthcare AI. The design also allows extension to additional data modalities and real-time deployment at the edge, broadening its impact and adaptability.