Gallée Luisa, Kniesel Hannah, Ropinski Timo, Götz Michael
Division of Experimental Radiology, Department for Diagnostic and Interventional Radiology, University Ulm Medical Centre, Ulm, Germany.
Visual Computing, University of Ulm, Germany.
Rofo. 2023 Sep;195(9):797-803. doi: 10.1055/a-2076-6736. Epub 2023 May 9.
BACKGROUND: Artificial intelligence is playing an increasingly important role in radiology. However, it is becoming increasingly difficult to trace how decisions are reached, especially with new and powerful methods from the field of deep learning. The resulting models fulfill their function without users being able to understand the internal processes; they are applied as so-called black boxes. In sensitive areas such as medicine in particular, the explainability of decisions is of paramount importance in order to verify their correctness and to evaluate alternatives. For this reason, active research is being conducted to elucidate these black boxes.

METHOD: This review presents different approaches to explainable artificial intelligence together with their advantages and disadvantages, illustrated with examples. It is intended to enable readers to better assess the limitations of the corresponding explanations when encountering them in practice and to strengthen the integration of such solutions in new research projects.

RESULTS AND CONCLUSION: Besides methods for analyzing black-box models for explainability, interpretable models offer an interesting alternative. Here, explainability is part of the process, and the learned model knowledge can be verified against expert knowledge.

KEY POINTS:
· The use of artificial intelligence in radiology offers many possibilities for safer and more efficient medical care. This includes, but is not limited to, support during image acquisition and processing or for diagnosis.
· Complex models can achieve high accuracy but make it difficult to understand how the data are processed.
· If explainability is taken into account during model planning, methods can be developed that are both powerful and interpretable.

CITATION FORMAT:
· Gallée L, Kniesel H, Ropinski T et al. Artificial intelligence in radiology - beyond the black box. Fortschr Röntgenstr 2023; 195: 797-803.
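To give a concrete sense of what a post-hoc analysis of a black-box model can look like, the sketch below implements occlusion sensitivity, one of the simplest such techniques: patches of the input are masked out one at a time, and the drop in the model's output score marks the regions the model relies on. The classifier here is a toy stand-in, not any model from the review; the function names and patch size are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a trained classifier (illustrative assumption, not a
# real model): it scores an image by the mean intensity of its central
# region, so the center should dominate the explanation.
def model_score(image: np.ndarray) -> float:
    h, w = image.shape
    return float(image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean())

def occlusion_map(image: np.ndarray, patch: int = 4) -> np.ndarray:
    """Slide a zero-valued patch over the image and record the score drop.

    Large drops indicate regions the model depends on -- a simple
    post-hoc explanation for an otherwise opaque model.
    """
    base = model_score(image)
    heat = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0
            # Score drop caused by hiding this patch.
            heat[y:y + patch, x:x + patch] = base - model_score(occluded)
    return heat

if __name__ == "__main__":
    img = np.ones((16, 16))
    heat = occlusion_map(img)
    # Central patches matter most for this toy model; corners not at all.
    print(heat[8, 8], heat[0, 0])
```

Note that, as the review discusses, such post-hoc maps explain a single prediction locally; unlike interpretable-by-design models, they do not expose the learned model knowledge itself.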