
Explainability and causability in digital pathology.

Affiliations

Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria.

Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Pathology, Berlin, Germany.

Publication Information

J Pathol Clin Res. 2023 Jul;9(4):251-260. doi: 10.1002/cjp2.322. Epub 2023 Apr 12.


DOI: 10.1002/cjp2.322
PMID: 37045794
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10240147/
Abstract

The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, currently, the best-performing AI algorithms for image analysis are deemed black boxes since it remains - even to their developers - often unclear why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights on the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning shall nurture the reader's understanding of why explainability is a specific issue in this field. Addressing this issue of explainability, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field since explainability and causability play a crucial role also for compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive 'what-if'-questions. In pathology, such user interfaces will not only be important to achieve a high level of causability. They will also be crucial for keeping the human-in-the-loop and bringing medical experts' experience and conceptual knowledge to AI processes.
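One family of XAI methods the abstract alludes to, widely used for whole slide image analysis, is occlusion sensitivity: hide one region of the image at a time and measure how much the model's score drops, so that regions the model relied on stand out. The sketch below is not from the paper; it is a minimal illustration with a hypothetical `score_fn` standing in for a trained classifier.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Slide a masking patch over the image and record how much the
    model's score drops when each region is hidden. Larger drops mark
    regions the model depended on for its prediction."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0  # occlude one patch
            heat[i // patch, j // patch] = base - score_fn(masked)
    return heat

# Toy stand-in "model": scores an image by the mean intensity of its
# top-left quadrant, mimicking a classifier that attends to one region.
def toy_score(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_map(img, toy_score, patch=8)
# heat[0, 0] shows the largest drop: the toy model depends only on that region.
```

The resulting heat map is exactly the kind of post-hoc explanation the review classifies as a first step: it shows *where* the model looked, but an explanation interface is still needed to let the pathologist ask interactive 'what-if' questions about *why*.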

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4836/10240147/9554ec55347b/CJP2-9-251-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4836/10240147/2a7c9e4c1e63/CJP2-9-251-g002.jpg

Similar Articles

[1]
Explainability and causability in digital pathology.

J Pathol Clin Res. 2023-7

[2]
Causability and explainability of artificial intelligence in medicine.

Wiley Interdiscip Rev Data Min Knowl Discov. 2019

[3]
Explainable AI and Multi-Modal Causability in Medicine.

I Com (Berl). 2021-1-26

[4]
Explainable AI (xAI) for Anatomic Pathology.

Adv Anat Pathol. 2020-7

[5]
Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European In Vitro Diagnostic Regulation.

N Biotechnol. 2022-9-25

[6]
Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations.

Kunstliche Intell (Oldenbourg). 2020

[7]
Artificial intelligence and explanation: How, why, and when to explain black boxes.

Eur J Radiol. 2024-4

[8]
CLARUS: An interactive explainable AI platform for manual counterfactuals in graph neural networks.

J Biomed Inform. 2024-2

[9]
Explainable AI for Bioinformatics: Methods, Tools and Applications.

Brief Bioinform. 2023-9-20

[10]
Revolutionizing Digital Pathology With the Power of Generative Artificial Intelligence and Foundation Models.

Lab Invest. 2023-11

Cited By

[1]
Enhanced metastasis risk prediction in cutaneous squamous cell carcinoma using deep learning and computational histopathology.

NPJ Precis Oncol. 2025-9-2

[2]
Advancing open-source visual analytics in digital pathology: A systematic review of tools, trends, and clinical applications.

J Pathol Inform. 2025-5-23

[3]
Recent Advances in Artificial Intelligence for Precision Diagnosis and Treatment of Bladder Cancer: A Review.

Ann Surg Oncol. 2025-4-12

[4]
Debating the pros and cons of computational pathology at the European Congress of Pathology (ECP) 2024.

Virchows Arch. 2025-3-25

[5]
The Role of Machine Learning Models in Predicting Cirrhosis Mortality: A Systematic Review.

Cureus. 2025-1-28

[6]
Reproducibility and explainability in digital pathology: The need to make black-box artificial intelligence systems more transparent.

J Public Health Res. 2024-10-29

[7]
Role of artificial intelligence in haematolymphoid diagnostics.

Histopathology. 2025-1

[8]
Exploring prognostic biomarkers in pathological images of colorectal cancer patients via deep learning.

J Pathol Clin Res. 2024-11

[9]
Explainable AI for computational pathology identifies model limitations and tissue biomarkers.

ArXiv. 2024-11-18

[10]
Decoding pathology: the role of computational pathology in research and diagnostics.

Pflugers Arch. 2025-4

References

[1]
Recommendations on compiling test datasets for evaluating artificial intelligence solutions in pathology.

Mod Pathol. 2022-12

[2]
The state of the art for artificial intelligence in lung digital pathology.

J Pathol. 2022-7

[3]
Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European In Vitro Diagnostic Regulation.

N Biotechnol. 2022-9-25

[4]
Artificial intelligence to identify genetic alterations in conventional histopathology.

J Pathol. 2022-7

[5]
Computational pathology for musculoskeletal conditions using machine learning: advances, trends, and challenges.

Arthritis Res Ther. 2022-3-11

[6]
Artificial intelligence for diagnosis and Gleason grading of prostate cancer: the PANDA challenge.

Nat Med. 2022-1

[7]
The Utility of Unsupervised Machine Learning in Anatomic Pathology.

Am J Clin Pathol. 2022-1-6

[8]
Independent real-world application of a clinical-grade automated prostate cancer detection system.

J Pathol. 2021-6

[9]
Interpretable survival prediction for colorectal cancer using deep learning.

NPJ Digit Med. 2021-4-19

[10]
Hidden Variables in Deep Learning Digital Pathology and Their Potential to Cause Batch Effects: Prediction Model Study.

J Med Internet Res. 2021-2-2
