GANterfactual-Counterfactual Explanations for Medical Non-experts Using Generative Adversarial Learning.

Author information

Mertes Silvan, Huber Tobias, Weitz Katharina, Heimerl Alexander, André Elisabeth

Affiliation

Lab for Human-Centered Artificial Intelligence, Augsburg University, Augsburg, Germany.

Publication information

Front Artif Intell. 2022 Apr 8;5:825565. doi: 10.3389/frai.2022.825565. eCollection 2022.

Abstract

With the ongoing rise of machine learning, the need for methods that explain the decisions of artificial intelligence systems is becoming increasingly important. Especially for image classification tasks, many state-of-the-art tools for explaining such classifiers rely on visual highlighting of important areas of the input data. In contrast, counterfactual explanation systems aim to enable counterfactual reasoning by modifying the input image such that the classifier would have made a different prediction. By doing so, they equip users with a fundamentally different kind of explanatory information. However, methods for generating realistic counterfactual explanations for image classifiers are still rare. Especially in medical contexts, where relevant information often consists of textural and structural features, high-quality counterfactual images have the potential to give meaningful insights into decision processes. In this work, we present GANterfactual, an approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques. Additionally, we conduct a user study to evaluate our approach in an exemplary medical use case. Our results show that, in the chosen medical use case, counterfactual explanations lead to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems that work with saliency maps, namely LIME and LRP.
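
The abstract's core technique, adversarial image-to-image translation for counterfactual generation, can be summarized as follows: a generator modifies the input image just enough that the classifier under inspection flips its prediction, while a discriminator keeps the modified image realistic. Below is a minimal PyTorch sketch of that loss composition, not the authors' implementation; the toy networks G and D, the frozen classifier f, and the weight lambda_cls are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Toy stand-ins (assumptions), sized for 1-channel 64x64 images.
    G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())      # generator
    D = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(16, 1))                               # discriminator
    f = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))            # classifier
    for p in f.parameters():
        p.requires_grad_(False)  # the classifier being explained stays fixed

    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()
    lambda_cls = 1.0  # weight of the classifier-flip term (assumed value)

    x = torch.rand(8, 1, 64, 64)  # stand-in for images predicted as class 0
    x_cf = G(x)                   # candidate counterfactual images

    # Adversarial term: counterfactuals should look like real images.
    adv_loss = bce(D(x_cf), torch.ones(8, 1))
    # Counterfactual term: the frozen classifier should now predict class 1.
    cls_loss = ce(f(x_cf), torch.ones(8, dtype=torch.long))

    g_loss = adv_loss + lambda_cls * cls_loss
    g_loss.backward()  # an optimizer over G.parameters() would now step;
                       # D is trained alternately on its own real/fake loss (not shown)

Translation frameworks of this kind typically also add cycle-consistency or reconstruction terms so that the counterfactual changes only classification-relevant regions; the paper's exact architecture and loss weights are given in the full text.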

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8fb/9024220/e430609bce6e/frai-05-825565-g0001.jpg
