Explainable AI in Clinical Decision Support Systems: A Meta-Analysis of Methods, Applications, and Usability Challenges.

Author Information

Abbas Qaiser, Jeong Woonyoung, Lee Seung Won

Affiliations

Department of Electrical Engineering, Institute of Space Technology, Islamabad 44000, Pakistan.

Department of Metabiohealth, Institute for Cross-Disciplinary Studies, Sungkyunkwan University, Suwon 16419, Republic of Korea.

Publication Information

Healthcare (Basel). 2025 Aug 29;13(17):2154. doi: 10.3390/healthcare13172154.

Abstract

The integration of artificial intelligence (AI) into clinical decision support systems (CDSSs) has significantly enhanced diagnostic precision, risk stratification, and treatment planning. However, the limited interpretability of AI models remains a barrier to clinical adoption, underscoring the critical role of explainable AI (XAI). This systematic meta-analysis synthesizes findings from 62 peer-reviewed studies published between 2018 and 2025, examining the use of XAI methods within CDSSs across clinical domains including radiology, oncology, neurology, and critical care. Model-agnostic techniques, together with visualization methods such as Gradient-weighted Class Activation Mapping (Grad-CAM) and attention mechanisms, dominated imaging and sequential-data tasks. However, gaps persist in user-centered evaluation, methodological transparency, and ethical practice, as evidenced by the scarcity of studies that assessed explanation fidelity, clinician trust, or usability in real-world settings. To enable responsible AI implementation in healthcare, our analysis emphasizes the need for longitudinal clinical validation, participatory system design, and uniform interpretability metrics. This review offers a thorough analysis of the current state of XAI practice in CDSSs, identifies methodological and practical issues, and charts a path toward AI solutions that are transparent, ethical, and clinically relevant.
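The abstract names Grad-CAM as a dominant visualization technique in imaging tasks. As an illustration only, not drawn from the reviewed studies, the sketch below shows the core Grad-CAM computation on a pretrained CNN, assuming PyTorch and torchvision are installed; the resnet18 backbone, the layer4 hook point, and the random input tensor are hypothetical stand-ins for a real clinical model and preprocessed image.

```python
# Minimal Grad-CAM sketch (illustrative assumptions noted above).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    # Cache the feature maps and register a hook to catch their gradients
    # during the backward pass.
    activations["value"] = out.detach()
    out.register_hook(lambda grad: gradients.__setitem__("value", grad.detach()))

# Hook the last convolutional block; Grad-CAM weights its feature maps
# by the spatially pooled gradients of the target class score.
handle = model.layer4.register_forward_hook(save_activation)

x = torch.randn(1, 3, 224, 224)       # stand-in for a preprocessed image
scores = model(x)
target_class = scores.argmax(dim=1)   # explain the top predicted class
scores[0, target_class].backward()

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # pooled grads
cam = F.relu((weights * activations["value"]).sum(dim=1))    # weighted sum + ReLU
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1]
handle.remove()
print(cam.shape)  # torch.Size([1, 1, 224, 224]): heatmap over the input
```

The resulting heatmap highlights the input regions that most increased the target class score, which is the kind of saliency evidence the reviewed CDSS studies present to clinicians.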

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b90/12427955/1f3a40d6a707/healthcare-13-02154-g001.jpg
