Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria.
Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Pathology, Berlin, Germany.
J Pathol Clin Res. 2023 Jul;9(4):251-260. doi: 10.1002/cjp2.322. Epub 2023 Apr 12.
The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, the best-performing AI algorithms for image analysis are currently deemed black boxes, since it often remains unclear, even to their developers, why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights into the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning is intended to help the reader understand why explainability is a specific issue in this field. To address this issue, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable to humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and to achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field, since explainability and causability also play a crucial role in compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology that enable contextual understanding and allow the medical expert to ask interactive 'what-if' questions. In pathology, such user interfaces will not only be important for achieving a high level of causability; they will also be crucial for keeping the human in the loop and bringing medical experts' experience and conceptual knowledge into AI processes.
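To illustrate one family of post-hoc XAI methods referred to above, the minimal sketch below computes an occlusion-based saliency map: a masking window is slid over an image patch and the drop in the model's predicted tumour probability is recorded, so that the regions the model relied on most stand out. This is an assumed, illustrative example rather than a method proposed in the article; predict_tumour_probability is a hypothetical stand-in for a trained whole-slide-image patch classifier.

    import numpy as np

    def predict_tumour_probability(patch: np.ndarray) -> float:
        # Hypothetical placeholder model: any function mapping an HxWxC patch
        # to a probability would do here.
        return float(patch.mean() / 255.0)

    def occlusion_saliency(patch: np.ndarray, window: int = 32, stride: int = 16) -> np.ndarray:
        # Score drop caused by occluding each region, averaged per pixel.
        baseline = predict_tumour_probability(patch)
        h, w = patch.shape[:2]
        heatmap = np.zeros((h, w))
        counts = np.zeros((h, w))
        for y in range(0, h - window + 1, stride):
            for x in range(0, w - window + 1, stride):
                occluded = patch.copy()
                occluded[y:y + window, x:x + window] = 0  # mask out one region
                drop = baseline - predict_tumour_probability(occluded)
                heatmap[y:y + window, x:x + window] += drop
                counts[y:y + window, x:x + window] += 1
        return heatmap / np.maximum(counts, 1)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        dummy_patch = rng.integers(0, 256, size=(128, 128, 3)).astype(np.float32)
        saliency = occlusion_saliency(dummy_patch)
        print("saliency range:", saliency.min(), saliency.max())

In practice, such a heatmap would be overlaid on the tissue patch so that the pathologist can judge whether the highlighted regions are histologically plausible, which is exactly the kind of contextual checking an explanation interface is meant to support.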