Counterfactual Diffusion Models for Mechanistic Explainability of Artificial Intelligence Models in Pathology.

Author Information

Žigutytė Laura, Lenz Tim, Han Tianyu, Hewitt Katherine J, Reitsam Nic G, Foersch Sebastian, Carrero Zunamys I, Unger Michaela, Pearson Alexander T, Truhn Daniel, Kather Jakob Nikolas

Affiliations

Else Kroener Fresenius Center for Digital Health (EKFZ), Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany.

Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Germany.

Publication Information

bioRxiv. 2025 Jan 8:2024.10.29.620913. doi: 10.1101/2024.10.29.620913.

Abstract

BACKGROUND

Deep learning can extract predictive and prognostic biomarkers from histopathology whole slide images, but its interpretability remains elusive.

METHODS

We develop and validate MoPaDi (Morphing histoPathology Diffusion), which generates counterfactual mechanistic explanations. MoPaDi uses diffusion autoencoders to manipulate pathology image patches and flip their biomarker status by changing the morphology. Importantly, MoPaDi includes multiple instance learning for weakly supervised problems. We validate our method on four datasets classifying tissue types, cancer types within different organs, center of slide origin, and a biomarker (microsatellite instability). Counterfactual transitions were evaluated through pathologists' user studies and quantitative cell analysis.
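To make the counterfactual mechanism concrete, here is a minimal sketch of how a diffusion-autoencoder latent can be pushed across a classifier's decision boundary to flip a patch's predicted biomarker status. This is an illustration under stated assumptions, not MoPaDi's released implementation: `autoencoder` is assumed to expose `encode()`/`decode()` for the semantic latent, `classifier` is assumed to map latents to class logits, and the function name `flip_biomarker_status` is hypothetical. Handling of the stochastic noise code used by diffusion autoencoders is omitted for brevity.

```python
import torch

def flip_biomarker_status(patch, autoencoder, classifier,
                          target_class, step_size=0.1, max_steps=50):
    """Hypothetical sketch: shift a patch's semantic latent along the
    classifier gradient until the predicted class flips, then decode
    the counterfactual image. APIs here are assumptions, not MoPaDi's.
    """
    # Encode the patch into a semantic latent we can optimize.
    z = autoencoder.encode(patch).detach().requires_grad_(True)
    for _ in range(max_steps):
        logits = classifier(z)
        if logits.argmax(dim=-1).item() == target_class:
            break  # predicted biomarker status has flipped
        # Gradient ascent on the target-class logit in latent space.
        loss = logits[..., target_class].sum()
        grad, = torch.autograd.grad(loss, z)
        z = (z + step_size * grad).detach().requires_grad_(True)
    # Decode the manipulated latent into a counterfactual patch.
    return autoencoder.decode(z.detach())
```

Comparing the decoded counterfactual with the original patch then reveals which morphological changes the classifier relies on, which is the explanatory step the abstract describes.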

RESULTS

MoPaDi achieves excellent image reconstruction quality (multiscale structural similarity index measure 0.966-0.992) and good classification performance (AUCs 0.76-0.98). In a blinded user study for tissue-type counterfactuals, counterfactual images were realistic (63.3-73.3% of original images identified correctly). For other tasks, pathologists identified meaningful morphological features from counterfactual images.
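The two headline metrics in this paragraph (MS-SSIM for reconstruction fidelity, AUC for classification) are standard and straightforward to compute. A minimal sketch using `torchmetrics` and `scikit-learn`, with placeholder tensors and labels standing in for real data:

```python
import torch
from torchmetrics.image import MultiScaleStructuralSimilarityIndexMeasure
from sklearn.metrics import roc_auc_score

# Reconstruction fidelity: MS-SSIM between originals and reconstructions.
ms_ssim = MultiScaleStructuralSimilarityIndexMeasure(data_range=1.0)
originals = torch.rand(8, 3, 256, 256)                 # placeholder batch
reconstructions = (originals + 0.01 * torch.randn_like(originals)).clamp(0, 1)
print(f"MS-SSIM: {ms_ssim(reconstructions, originals):.3f}")

# Classification performance: AUC from labels and predicted probabilities.
labels = [0, 0, 1, 1, 0, 1, 1, 0]                      # placeholder truth
probs = [0.1, 0.3, 0.8, 0.7, 0.4, 0.9, 0.6, 0.2]       # placeholder scores
print(f"AUC: {roc_auc_score(labels, probs):.2f}")
```

Note that in the blinded user study, a correct-identification rate near 50% would mean counterfactuals are indistinguishable from originals; the reported 63.3-73.3% indicates the generated images are realistic but not perfectly so.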

CONCLUSION

MoPaDi generates realistic counterfactual explanations that reveal key morphological features driving deep learning model predictions in histopathology, improving interpretability.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8527/11727472/f3680f61e980/nihpp-2024.10.29.620913v2-f0001.jpg
