Evaluating Explainable Artificial Intelligence (XAI) techniques in chest radiology imaging through a human-centered lens.

Affiliations

School of Computer Science and Digital Technologies, Aston University, Birmingham, United Kingdom.

Medical Imaging Department, University Hospital of Sharjah, Sharjah, United Arab Emirates.

Publication information

PLoS One. 2024 Oct 9;19(10):e0308758. doi: 10.1371/journal.pone.0308758. eCollection 2024.

Abstract

The field of radiology imaging has experienced a remarkable increase in the use of deep learning (DL) algorithms to support diagnostic and treatment decisions. This rise has led to the development of Explainable AI (XAI) systems to improve the transparency of, and trust in, complex DL methods. However, XAI systems face challenges in gaining acceptance within the healthcare sector, mainly due to technical hurdles in utilizing these systems in practice and the lack of human-centered evaluation and validation. In this study, we focus on visual XAI systems applied to DL-enabled diagnostic systems in chest radiography. In particular, we conduct a user study to evaluate two prominent visual XAI techniques from the human perspective. To this end, we created two clinical scenarios for diagnosing pneumonia and COVID-19 using DL techniques applied to chest X-ray and CT scans. The achieved accuracy rates were 90% for pneumonia and 98% for COVID-19. Subsequently, we employed two well-known XAI methods, Grad-CAM (Gradient-weighted Class Activation Mapping) and LIME (Local Interpretable Model-agnostic Explanations), to generate visual explanations elucidating the AI decision-making process. These explanations were then evaluated by medical professionals in a user study in terms of clinical relevance, coherency, and user trust. In general, participants expressed a positive perception of the use of XAI systems in chest radiography, but showed a noticeable lack of awareness regarding their value and practical aspects. Regarding preferences, Grad-CAM outperformed LIME in terms of coherency and trust, although concerns were raised about its clinical usability. Our findings highlight key user-driven explainability requirements, emphasizing the importance of multi-modal explainability and the necessity of increasing awareness of XAI systems among medical practitioners. Inclusive design was also identified as crucial to better aligning these systems with user needs.
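For readers unfamiliar with how such saliency maps are produced, the sketch below shows a minimal Grad-CAM implementation in PyTorch. It is illustrative only: the paper's trained pneumonia and COVID-19 classifiers are not public, so a generic ImageNet-pretrained ResNet-50 and a random tensor stand in for the actual model and a preprocessed chest X-ray; the hook-based weighting of the last convolutional block follows the original Grad-CAM formulation (Selvaraju et al.).

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in model: the paper's trained chest X-ray/CT classifiers are not
# public, so a generic ImageNet-pretrained ResNet-50 is used for illustration.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Grad-CAM targets the last convolutional block, where high-level semantics
# and spatial layout are both still present.
target_layer = model.layer4[-1]
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

def grad_cam(x, class_idx=None):
    """Return an (H, W) heatmap in [0, 1] for a (1, 3, H, W) input batch."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Per-channel weights: global-average-pool the gradients (the alpha_k).
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    # Weighted combination of activation maps, then ReLU, per Selvaraju et al.
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]

# Hypothetical input: a random tensor standing in for a preprocessed X-ray.
heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```

LIME, by contrast, perturbs superpixel regions of the input image and fits a local surrogate model to the classifier's responses, so it requires no access to gradients or internal layers; this difference in mechanism is one reason the two methods can yield visually distinct explanations for the same prediction.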

Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9487/11463756/3881d6691e62/pone.0308758.g001.jpg
