A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging.

Authors

Deepshikha Bhati, Fnu Neha, Md Amiruzzaman

Affiliations

Department of Computer Science, Kent State University, Kent, OH 44242, USA.

Department of Computer Science, West Chester University, West Chester, PA 19383, USA.

Publication

J Imaging. 2024 Sep 25;10(10):239. doi: 10.3390/jimaging10100239.

Abstract

The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis.
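To make the idea of visualization techniques concrete, the following is a minimal sketch of a gradient-based saliency map, one of the simplest methods in the family the survey covers. It uses a toy linear model (an assumption for illustration, not a model from the paper): for a linear scorer the gradient of the output with respect to each input pixel equals the corresponding weight, so the gradient magnitude directly highlights which pixels most influence the prediction.

```python
import numpy as np

# Toy "model": a single linear layer scoring a flattened 4x4 image.
# For a linear model score = w . x, the gradient of the score with
# respect to each input pixel is simply the corresponding weight.
rng = np.random.default_rng(0)
w = rng.normal(size=16)          # model weights (one per pixel)
x = rng.normal(size=16)          # flattened input "image"

score = float(w @ x)             # model output for this input
grad = w                         # d(score)/d(x) for a linear model

# Saliency map: magnitude of the gradient, reshaped to image space
# and normalized to [0, 1] so it can be overlaid on the input image.
saliency = np.abs(grad).reshape(4, 4)
saliency /= saliency.max()

print(saliency.shape)            # (4, 4)
```

For deep networks the gradient is obtained by backpropagation rather than read off the weights, and refinements such as Grad-CAM aggregate gradients at a convolutional layer instead of the input, but the principle of attributing the prediction to input regions is the same.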


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f1a/11508748/84b1c32e9406/jimaging-10-00239-g001.jpg
